John Edwards
Children and Privacy Online: It’s Time to Change the Dynamic – More Responsibility on the Platforms, More Autonomy for the Kids

Not so long ago, the best advice available for keeping children safe online was to locate the family computer in a common area. Thus, the theory went, if children strayed into an unsafe corner of the internet, were exchanging messages with an unknown correspondent, or were accessing materials inappropriate for their age and maturity, a wise and caring parent could intervene.

Such an approach now seems quaint. First, for the obvious reason that most online activity is now mobile and can be carried from room to room, accessed at the bus stop, the playground, or under the bedclothes late at night.

But the naivety was there before the mode of access shifted to portable devices. The advice imposed a degree of transparency, but the burden of that transparency was borne by the end user: the child, and the presumptively vigilant and savvy parent. In other words, it represented an abdication of responsibility by content providers and online services that target children, or are indifferent to whether children access their sites and hosted materials.

Parents should be the first line of defence for children and young people. This was recognised by the OECD Recommendation on the Protection of Children Online in 2012. But even then, the limits of parents’ ability to effectively counsel and supervise children online were recognised as a significant constraint.

That limitation increases with every technological innovation and social media iteration. By the time parents are aware of Snapchat or TikTok, the young have moved on to the next thing.

The digital world has enormous potential to enhance and protect children’s rights. But the sword cuts both ways. The very same characteristics that allow children to independently access information in their best interests, and to further their autonomy and self-development, also provide a delivery mechanism for harmful and exploitative content, for the harvesting of data, and for the indefinite retention of ill-judged, intemperate or simply regretted posts.

The age of user-generated content is a particular challenge. Children, their parents and others in the community can take private moments and upload them for permanent, infinitely reproduced consumption, editing, manipulation and recontextualisation by anyone in the world with an internet connection.

Children can be induced to innocently participate in online “challenges” on widely used platforms that are in fact intended to harvest fodder for fetishists.1 As recently as October 2019, the Guardian reported that Facebook can identify, and sell advertising targeted at, “children interested in alcohol and gambling”.2

New functions can be added to existing platforms with little testing and few safeguards. Livestreaming, for example, while promoted by Facebook’s Mark Zuckerberg as a way for a dad to tune in remotely to an eight-year-old’s birthday party, can just as easily be used to expose that same eight-year-old to the horror of a mass shooting, as happened in Christchurch, New Zealand in March 2019. In the aftermath of that atrocity, Facebook could not even answer the question of how many instances of child abuse, rape, suicide and murder its insufficiently tested application had facilitated since launch.3 That failure laid bare the magnitude of the shortcomings, including the rush to bring the livestreaming product to market before it was adequately tested. Six weeks after 51 people were shot to death in their place of worship, and their pain and anguish was pushed onto the tablets and phones of unsuspecting children and adults the world over, Facebook introduced measures which, had they been in place at the time, would have prevented the terrorist from broadcasting his attack.4

At least three of the rights in the 5Rights Framework speak directly to children’s privacy. The need for children, and their parents, to have good, clear, timely and easily understood information about the consequences of interacting with online services is fundamental to making the digital world safe for children and young people. The Right to Know and the Right to Informed and Conscious Use have formed the basis of data protection and privacy laws around the world for at least 40 years. That it is necessary today to make a special case for tech and content companies to comply with those principles in respect of children is a stark illustration of the failure of the regulatory model to date, and of the success of the digital oligarchs in keeping ahead of politicians and regulators.

The third privacy right advocated by the 5Rights Framework that allows some mitigation of the accreted harms of data harvested under opaque, misleading or absent pretences is the Right to Remove.

We are seeing the first generation of children born in the social media era mature into adulthood. For many, their every developmental step will have been documented and shared online, often innocently, by a parent with no knowledge or foresight of the surveillance capitalist business models that were to come.

I have had to confront a case in which an adult traumatised by childhood experiences of abuse sought to regain control and restrict dissemination of nude images of her 13-year-old self, which had become part of an artist’s portfolio and gallery collections. 

But a child should not have to wait until adulthood to exercise some autonomy over the dissemination of private images. Nor should she be required to justify exercising that right with the kind of extreme circumstances of the case my Office encountered.

The Right to Remove is a challenge to a number of foundational principles of the digital economy. It is a specific challenge to those which have emerged from a culture that regards the right to freedom of expression almost as supreme law. Freedom of expression has been invoked to such an extent that civil rights (!) organisations will seek to overturn laws which attempt to support victims of revenge porn, in the belief that one person’s right to post an intimate image of another, in breach of confidence and trust, trumps the subject’s right to that image.

It is probably for this reason that the Right to Remove is expressed in such modest terms by 5Rights as “the right to easily remove what you yourself have put up”.

While worthy, and sufficiently moderate to win some acceptance among the US digital oligarchs, that formulation is, I would suggest, an insufficiently ambitious attempt at reclaiming agency and autonomy. Why stop at simply being able to control material that has been uploaded or provided by the child?

Should a child not have a right to assert, even against a parent, that an amusing image of toilet training still accessible on the parent’s Facebook page might be fodder for the bullies tormenting them and ought to be taken down? Or that, to a 12-year-old, the video of their distress at not receiving the Christmas present they had hoped for at eight is not an amusing memory to be shared with the world?

As far back as 2015, Kate Eichhorn, in ‘The End of Forgetting: Growing Up with Social Media’, noted that British parents posted, on average, nearly 200 photographs of their child online each year, and that the terms on which those images are hosted, packaged and analysed change unilaterally and arbitrarily. The New York Times recently reported that hundreds of thousands of images of children uploaded to Flickr in 2005 ended up in a facial recognition/AI training database.5 That we only learn about these secondary and tertiary uses of information 14 years after the fact demonstrates the impossibility of parents making sound judgements for their children in the face of an overwhelming information asymmetry. The Right to Remove can rebalance that asymmetry.

Children should have the presumptive, no-questions-asked right to delete content which they have submitted, or in which they appear, regardless of the relationship between them and the “owner” or poster of the image or information. Should that be an absolute right? Perhaps not, but it should be incumbent on an adult or commercial enterprise to justify why they have not acceded to a child’s preference. The burden and the cost of making and defending such a judgement should be borne by the agency seeking to profit from the engagement and the content.

Knowing that their business model depends on the ongoing licence to maintain the content, and that they will incur the cost and administrative burden of removal requests, might well motivate digital industries to better address the rights to informed and conscious use, to know, and to digital literacy.

John Edwards was appointed as Privacy Commissioner of New Zealand in February 2014, following a career of over 20 years practising law. He has degrees in law (LLB) and public policy (MPP) from Victoria University of Wellington, and has advised and represented a wide range of clients from the public and private sectors. He chaired the New Zealand Law Society Privacy and Human Rights Committee, was Contributing Editor of Brookers Human Rights Law and Practice, and has published widely on human rights and privacy matters. In addition to a practice specialty in the field of information and privacy law, he held warrants as a district inspector for mental health and as a district inspector for intellectual disability services. He has also provided legal services to the Kingdom of Tonga. In October 2014, John was elected Chair of the Executive Committee of the International Conference of Data Protection and Privacy Commissioners, and completed his three-year term in October 2017.