Jay Harman
The Enormous Potential of Technology – and the Absence of Children in the Design of the Digital World
In the digital age, how should we balance the freedom, security and privacy of children? Does an adult’s right to freedom of expression trump a child’s right to be protected from harmful material? When should a child’s right to autonomy or freedom of association trump a parent’s concern to know where they are? Can privacy protect children if it also protects those who wish to harm them? And, importantly, what responsibility should be bestowed on technology companies for striking this balance? How should they be held to account?

“Meanwhile, the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation.”
– Douglas Adams, The Hitchhiker’s Guide to the Galaxy
In the digital age, we have never been more connected, and yet never more polarised. Free speech, too, is as untrammelled as it has ever been, but is often used to distort the truth and subvert our democracy, rather than to promote and strengthen them.
There are, therefore, two important lessons that we can draw from the plight of the Babel fish, the small creature described in The Hitchhiker’s Guide to the Galaxy which, when inserted into one’s ear, allows one to understand any language spoken by any species from any planet. The first is the law of unintended consequences, and the second is the risk that, in the wrong hands, ‘every virtue carried to the extreme becomes a vice’. These lessons are particularly relevant to the place of children in the digital environment.
The foundational, idealistic vision of the internet as intrinsically egalitarian demands that all users be equal and be treated equally. While admirable in theory, this vision has led to children being treated as adults in the digital world, denied any meaningful recognition of their age or of the needs and vulnerabilities that come with it. “Equality”, taken to its extreme and with little thought for the consequences, has effectively served to discriminate against children and to rob them of their childhood.
Elsewhere, the growing realisation of our surveilled existence, brought into sharp focus by a series of high-profile and catastrophic breaches of public trust, has led to a distorted pursuit of user privacy. Explaining away concerns about his company’s pending implementation of end-to-end encryption, Mark Zuckerberg said: “Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion.” For context, 16.8 million instances of child sexual exploitation or abuse were reported on Facebook’s platforms in 2018 alone, only a tiny fraction of which would be captured if end-to-end encryption were implemented without the necessary protections in place. “Privacy”, taken to its extreme and with little thought for the consequences, could come to protect paedophiles over children.
In a similar vein, the gold standard for an integrated society is often held to be one in which “everyone is a potential friend”. Social media companies have been particularly determined in their pursuit of this ideal. But as algorithms are repeatedly shown to serve up sexual predators to children (and vice versa) in the form of automated ‘friend recommendations’, it seems no one stopped to consider that a world in which everyone is a potential friend might not be a safe one for a child. “Connection”, taken to its extreme and with little thought for the consequences, has served up children to strangers, and strangers to children.
The recommendation of content has caused problems, too. Recommendation algorithms exist to serve the right content to the right people at the right time. This ensures that the ‘infinite library’ is accessible and that the content we view is relevant and engaging. In 2017, however, a British schoolgirl called Molly Russell took her own life, and it was subsequently revealed that self-harm and suicide content had been repeatedly and relentlessly recommended to her by the algorithms of Instagram and Pinterest. Repurposed to focus exclusively on boosting user engagement, these algorithms stopped serving Molly’s interests and instead used her data to exploit her vulnerabilities. “Engagement”, taken to its extreme and with little thought for the consequences, would sooner promote suicide to children than allow them to disengage.
And what of efforts to protect children online more generally? To date, online safety education has largely taken the form of ‘stranger danger’ messages intended to discourage children from using digital technology entirely, rather than encourage them to use it responsibly. Parental control apps and student monitoring systems, whose rise has been inexorable, can likewise coddle rather than genuinely protect, denying children vital and formative opportunities both to encounter risk and to learn how to manage it. “Online protection”, taken to its extreme and with little thought for the consequences, can constrain children’s flourishing rather than create the safe and supportive conditions on which their flourishing depends.
It’s important to emphasise again that none of these things is problematic in and of itself. Equality, privacy, connection, engaging content, and a parent’s impulse to protect their child are all necessary for a thriving digital environment. The problems come when the blinkers are applied and the needs of children are ignored (and, indeed, when we allow these virtues to be defined in ways that distort their true meaning, to serve ulterior and commercial interests).
The ‘essay question’ we were given for this book was to consider how we might balance privacy, freedom of expression, and security in making the digital world fit for children. Reading the essays that come before mine, I was struck by the significance of this group of people writing about children. An African Union Commissioner, a playwright, an applied mathematician, a NATO cyber security expert, and a UN Special Rapporteur. All of them, whatever their line of work or field of expertise, considering carefully the needs of children in the digital age.
This is not ‘normal’. It may even be a first. And so, my answer to the ‘essay question’ is that the principal problem for children has not been a failure of ‘balance’. The problem has been that children are rarely part of the equation at all. For all the proposals made in this book, the implicit consensus among its contributors is that the solution is one with which we are all familiar: the best interests of children should be a primary consideration in all matters that affect them.
It is dispiriting how regularly this fundamental principle gets cast aside. We must legislate to make sure that it cannot be.
Companies must be ready to account for the steps they have taken to protect and promote the best interests of children in the design of their services, and regulators must be resourced and empowered to instruct or sanction any companies that fall short. Above all, this means mandatory child impact assessments for any digital service, product, or feature that a child is likely to access. These assessments must be transparent, auditable, and carried out both in advance and at regular intervals. In sum, the message to industry should be crystal clear: if you fail to acknowledge honestly or respond appropriately to the impact of your business on children, there will be consequences.
I am encouraged that something like this has been proposed by the UK’s Information Commissioner as part of her Age Appropriate Design Code, albeit in relation to data specifically. This principle also sits at the heart of the UK Government’s plans for a ‘duty of care’. We will have to wait for these to take effect before judging their impact, but I suspect they will confirm what we already know: the enormous potential of technology can only be realised when it is designed with children in mind.