John Carr OBE
Privacy Lawyers Need to Up Their Game: An Internet that is Safer and Better for Children is Going to be Safer and Better for Everyone

“Protecting Child Privacy” was the title of a day-long symposium I attended in November 2019 at the Institute for Privacy Protection, part of Seton Hall Law School in the USA. The organisers had attracted a star-studded cast of American speakers, and there were two speakers from Europe: myself and Max Schrems.1 Now if there is anyone who can claim rock star status in the world of online privacy it is probably Schrems. The fact that he is young, and looks and dresses like a rock star, only adds lustre.

Here is a key excerpt from his brilliant presentation:

In preparing my comments for today I talked to lots of lawyers from the privacy world. However, I found it very difficult to find any who specialised or claimed expert knowledge in relation to children and privacy.

OK, Schrems may have a limited circle of associates, but what he said struck a chord with me. It exactly mirrored my own experience.

There is a substantial and growing body of lawyers who specialise in privacy. The GDPR has all but guaranteed the emergence of a “privacy industry”. But it is still very early days. I can still only think of a handful of lawyers or other experts who have developed and sustained a major interest in children’s privacy rights, or who have a deep understanding of the real-world consequences for children of this or that decision about how to draft or interpret a law or regulation with a privacy dimension. Sadly, few of these lawyers are working for the European Commission. Neither are they working for many of the major Data Protection Authorities around Europe nor, come to that, for the European Data Protection Board. The same was true of its predecessor, the Article 29 Working Party.2

In its entire 19-year life, the Article 29 Working Party produced only one major report on children.3 It was published in 2008 and concerned the protection of children’s personal data in schools. An important issue to be sure, but…

Article 29 and its associates doubtless became distracted in the run-up to the publication of the draft text of the GDPR. In December 2011, Statewatch leaked a late draft.4 In it, the Commission had proposed that for persons under the age of 18, where consent was to be the basis for processing personal data, it would be necessary for the service provider to obtain parental consent. This was widely interpreted in the media as meaning Facebook and other platforms would be closed to everyone under the age of 18 unless their parents agreed to let them join. The balloon went up. The idea was dropped. A month or so later, when the official text finally appeared, 18 had been replaced by 13, to apply in every EU Member State. No variation. Why was 18 ever considered an appropriate minimum? We can guess, although no real explanation was offered. But why did Commission officials then shift from 18 to 13 so rapidly? It seems unlikely the shift was based on any new research that had suddenly become available, and neither was it based on any consultation with experts in the field.

This time Commission officials did offer an explanation, but not a very good one. They simply said 13 was to be preferred because that was what the Americans were already doing, and it had therefore become a de facto standard. Beyond that, no actual evidence was produced to show why 13 was a good pick. When it came to the final decision, the politicians threw it out. Without more, the mere fact that the Americans were already doing it, and that it had become a de facto standard, did not convince the French, the Germans and many other governments that the whole of the EU should go the same way. Conceivably, this cack-handed approach even encouraged European governments to look actively for an alternative. That is how we ended up with a hotch-potch of ages between 13 and 16, with 16 as the default age of consent. Again, no evidence was produced to justify or explain the decision, and no obvious thought had been given to the possible consequences of having different age standards for children in different countries connecting with each other, using the same apps, at the same time.

Consent is the most easily understood (if often poorly implemented) basis for processing personal data when engaging with an online service and, in respect of children, it offers a route which at least encourages the possibility of parents being involved in their children’s online activity.

Yet privacy lawyers and those involved in writing the GDPR constantly stressed how, when considering which of the Article 6 grounds5 to use as the basis of processing personal data, it would generally be “better” for companies not to rely on consent but, instead, to use one of the other grounds mentioned. In that light we have to ask: could or should these same lawyers have anticipated how those other grounds might, effectively, cut parents out of the loop? By allowing companies to rely on, for example, “legitimate interests” or contract as the basis of children’s involvement with their products and services, the question of parental consent is made redundant and, with it, at least some possibility of parental engagement disappears. Facebook took the opportunity, in effect, to create a whole new class of membership or type of user. It side-stepped parents altogether.

Sticking with the shortcomings of the GDPR, by which I mean sticking with the child-unaware shortcomings of those who drafted it, let us not forget the scourge of child sexual abuse material (CSAM) that continues to circulate on the internet, and the gigantic harm the mere fact of circulation does to the victims depicted.

Under existing rules, companies that sell domain names are meant to collect accurate information about the identity and contact details of the person or entity buying the domain. Historically, these data were meant to be made immediately available to everyone via a publicly accessible database called WHOIS. Yet in 2018, the UK’s Internet Watch Foundation (IWF) identified over 1,000 websites that appeared to have been established solely in order to distribute child sexual abuse material.6 In the entire passage of the GDPR, in the draft text, in the Parliament and in Committee, WHOIS was never even mentioned. Not once. Yet when the Regulation came into force, registrars responded by redacting almost all registrant data from public WHOIS records. The upshot is that law enforcement and other interested agencies wanting to track down whoever is publishing CSAM via a website now have to jump through many time-consuming and expensive hoops to discover who the supposed owners are. This could and should have been avoided. The law must be changed as soon as possible. It provides another example of how the absence of knowledgeable input during the drafting of the GDPR has led to poor outcomes for children.
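For readers unfamiliar with it, WHOIS is a deliberately simple protocol (RFC 3912): the client opens a TCP connection to port 43 on a WHOIS server, sends a domain name, and reads back a plain-text record. A minimal sketch in Python follows; the server shown is the real registry server for .com domains, whose reply points on to the registrar’s own WHOIS server, where the registrant details, nowadays largely redacted, would be held.

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Query a WHOIS server over TCP port 43 (RFC 3912) and return the raw reply.

    The default server is the .com registry; its reply names the registrar's
    own WHOIS server, which holds the (now largely redacted) registrant record.
    """
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(whois_query("example.com"))
```

Running this against almost any domain registered after May 2018 makes the point: where the record once named a registrant, it now typically reads “REDACTED FOR PRIVACY” or similar.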

Finally, thanks to some prolonged and intense lobbying, the ePrivacy Regulation has been temporarily shelved. But had it gone through in its original form, it would have become illegal for companies providing messaging services to continue using tools such as PhotoDNA to try to detect the presence of already known CSAM.7 What were the draftspersons thinking of when they put that together? Not children.
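The pattern such tools rely on is worth spelling out, because it shows how narrowly targeted they are: compute a fingerprint of each image and compare it against a vetted list of fingerprints of already known material; nothing else is inspected. PhotoDNA itself is a proprietary perceptual hash, robust to resizing and re-encoding, so the sketch below, which substitutes an ordinary SHA-256 digest and a hypothetical hash list, is illustrative only; a cryptographic hash would miss any altered copy, which is precisely the weakness perceptual hashing exists to address.

```python
import hashlib
from pathlib import Path

# Hypothetical hash list. In practice this would be a vetted database of
# perceptual hashes supplied by a body such as the IWF or NCMEC.
KNOWN_HASHES = {
    "0a1b2c3d4e5f",  # placeholder entry, not a real hash
}

def file_digest(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks to bound memory use.
    (PhotoDNA would compute a perceptual hash here instead.)"""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_material(path: Path) -> bool:
    """True only if the file's fingerprint matches the known-material list."""
    return file_digest(path) in KNOWN_HASHES
```

The design point the original ePrivacy draft missed is visible in the code: a matching system of this kind learns nothing about a message beyond a single yes/no answer against previously verified material.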

For all of the above reasons, the work of the 5Rights Foundation is hugely important, both in helping to develop the UK’s Age Appropriate Design Code and in its work with the Committee on the Rights of the Child on the General Comment on children’s rights in the digital environment.

Yes, there are challenges in squaring the privacy circle for children without trespassing on adults’ rights. But up until now too many people, too many lawyers who ought to have known better, have put children’s privacy in a box marked “too difficult” and ignored it while they addressed other issues. It is vital that we break out of this circle of uncertainty by developing a corps of lawyers, activists and institutions who understand the importance of privacy for children.

John Carr OBE writes and consults about internet safety and security, and is one of the world’s leading authorities on children and young people’s use of the internet and associated new technologies. John is Secretary of the UK Children’s Charities Coalition on Internet Safety, and works extensively in many parts of the world. John has been an Expert Adviser to the United Nations (International Telecommunication Union), the European Union, the European Network and Information Security Agency and the Council of Europe. He is Technical Adviser (Online) to the global NGO ECPAT International, and Expert Adviser to the European NGO Alliance for Child Safety Online (eNACSO). John is a member of the British Board of Film Classification Advisory Panel on Children’s Viewing, has acted as a consultant to the Office of the Children’s Commissioner for England, and is a former Director of the Internet Watch Foundation. John was a Member of the Home Secretary’s Task Force on Child Internet Safety, which became the Executive Board of the UK Council for Child Internet Safety (UKCCIS).