Professor Hany Farid
Protecting Children Online: The Past, Present, and Future

Let’s begin with some sobering statistics: in 2018 alone, the US-based National Center for Missing and Exploited Children (NCMEC) received over 18.4 million reports to its CyberTipline, constituting over 45 million pieces of child sexual abuse material (CSAM). This is a rate of approximately 2000 reports per hour, every hour, every day, every week, every month of the year. These reported images record the sexual assault of, for the most part, children under the age of 12 (and often as young as a few months). Since its inception in 1998, the CyberTipline has received a total of 55 million such reports, meaning that the reports from 2018 alone constitute approximately one third of all reports over the past two decades.

Even with these staggering numbers, they are only the tip of the iceberg: they account for only one reporting agency and do not account for the entirety of online services (many of which do not actively participate in programs to report CSAM), for services that use end-to-end encryption, for peer-to-peer networks, for personal correspondence, or for the dark web.

How, in 20 short years, did we go from the promise of the internet to democratize access to knowledge and make the world more understanding and enlightened, to the horror that is the internet today?


The past
The landmark 1982 Supreme Court case of New York v. Ferber established that CSAM is not protected speech, clearing the way for laws criminalizing its creation, distribution, and possession. The result of this ruling, along with significant law enforcement efforts, was effective, and by the mid-1990s CSAM was, according to NCMEC, on the way to becoming a “solved problem.” By the early 2000s, however, the rise of the internet brought with it an explosion in the global distribution of CSAM. Alarmed by this growth, in 2003 Attorney General Ashcroft convened executives from the top technology firms and asked them to propose a solution to eliminate this harmful content from their networks. Between 2003 and 2008, these technology companies did nothing to address the ever-growing problem of their online services being used to distribute a staggering amount of CSAM depicting increasingly violent acts on increasingly younger children (in some cases only a few months old).

In 2008, Microsoft invited me to attend a yearly meeting of a dozen or so technology companies to provide insight into why, after five years, there was no solution to the growing and troubling spread of CSAM online. Convinced that a solution was possible, I began a collaboration with Microsoft researchers to develop technology that could quickly and reliably identify and remove CSAM from online services. Within a year we had developed and deployed such a technology: photoDNA, a robust image-hashing technology. Robust image hashing algorithms like photoDNA work by extracting a distinct digital signature from known harmful or illegal content and comparing these signatures against content at the point of upload. Flagged content can then be instantaneously removed and reported. PhotoDNA has, in the intervening decade, seen global adoption (it is licensed at no cost) and has proven effective in disrupting the global distribution of previously identified CSAM: more than 95% of the 18.4 million reports to NCMEC’s CyberTipline in 2018 came from photoDNA.
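
To make the matching step concrete, here is a minimal sketch of how a robust image hash can be compared against a database of known signatures at the point of upload. PhotoDNA’s actual algorithm is proprietary; the simple “average hash” used here, along with the known_hashes set and the distance threshold, are illustrative assumptions only.

```python
# Minimal sketch of robust-hash matching at upload time. This is NOT photoDNA,
# whose algorithm is proprietary; an 8x8 "average hash" stands in for the real
# signature, and the database, threshold, and function names are hypothetical.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to an 8x8 grayscale thumbnail; encode each pixel as one bit
    (1 if brighter than the mean), yielding a 64-bit signature."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    signature = 0
    for p in pixels:
        signature = (signature << 1) | (1 if p > mean else 0)
    return signature

def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two signatures differ."""
    return bin(a ^ b).count("1")

def flag_at_upload(path: str, known_hashes: set[int], threshold: int = 10) -> bool:
    """True if the upload is within `threshold` bits of any previously
    identified signature; flagged content can then be removed and reported."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```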

This story illustrates an important point. The issue of inaction for more than five years was never one of technological limitations; it was simply an issue of will: the major technology companies at the time did not want to solve the problem. This is particularly inexcusable given that we were addressing some of the most unambiguously violent, heinous, and illegal content being shared on their services. The issue was, in my opinion, two-fold: (1) Fear. Fear that if it could be shown that CSAM could be efficiently and effectively removed, then the technology sector would have no defense for not contending with the myriad other abuses on their services; and (2) Priorities. The majority of social media services are driven by advertising dollars, which in turn means that they are motivated to maximize the amount of time that users spend on their services. Optimizing for the number of users and user engagement is, in many cases, at odds with effective content moderation.

The present
In the intervening decade following the development and deployment of photoDNA, the titans of tech have barely done anything to improve or expand this technology. This is particularly stunning for an industry that prides itself on bold and rapid innovation.

In defense of the technology sector, they are contending with an unprecedented amount of data: some 500 hours of video uploaded to YouTube every minute, some one billion daily uploads to Facebook, and some 500 million tweets per day. On the other hand, these same companies have had over a decade to get their house in order and have simply failed to do so. And these services do not seem to have trouble dealing with unwanted material when it serves their interests: they routinely and effectively remove copyright-infringing material and adult pornography.

During his 2018 Congressional testimony, Mr. Zuckerberg repeatedly invoked artificial intelligence (AI) as the savior for content moderation (in five to ten years’ time). Putting aside that it is not clear what we should do in the intervening decade, this claim is almost certainly overly optimistic.

Last year, for example, Mike Schroepfer, Facebook’s chief technology officer, showcased Facebook’s latest AI technology for discriminating images of broccoli from images of marijuana.

Despite all of the latest advances in AI and pattern recognition, this system is only able to perform this task with an average accuracy of 91%. This means that approximately 1 in 10 times, the system is wrong. At the scale of a billion uploads a day, this technology cannot possibly automatically moderate content. And this discrimination task is surely much easier than the task of identifying the broad class of CSAM, extremism, or disinformation material.

By comparison, the robust image hashing technique used by photoDNA has an expected error rate of approximately one in 50 billion. The promise of AI is just that, a promise, and we cannot wait a decade (or more) in the hope that AI will improve by the nine orders of magnitude needed before it can contend with automated online content moderation.
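
As a rough, back-of-the-envelope check on that “nine orders of magnitude” figure (the two error rates are taken from the text above; the rest is simple arithmetic):

```python
import math

ai_error = 1 - 0.91      # ~9% error rate for the broccoli/marijuana classifier
hash_error = 1 / 50e9    # photoDNA's expected error rate, roughly 2e-11

gap = ai_error / hash_error
print(f"gap: {gap:.1e}, i.e. roughly {math.log10(gap):.1f} orders of magnitude")
# gap: 4.5e+09, i.e. roughly 9.7 orders of magnitude

# At a billion uploads per day, a 9% error rate translates to ~90 million
# mistaken decisions per day; a one-in-50-billion rate to about one every 50 days.
print(f"{ai_error * 1e9:.0f} vs {hash_error * 1e9:.2f} errors per billion uploads")
```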

End-to-end encryption
Earlier this year, Mr. Zuckerberg announced that Facebook is implementing end-to-end encryption on its services, preventing anyone (including Facebook) from seeing the contents of any communications. In announcing the decision, Mr. Zuckerberg conceded that it came at a cost:

“At the same time, there are real safety concerns to address before we can implement end-to-end encryption across all of our messaging services,” he wrote. “Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion.”

The adoption of end-to-end encryption would significantly hamper the efficacy of programs like photoDNA. This is particularly troubling given that the majority of the millions of yearly reports to NCMEC’s CyberTipline originate on Facebook’s messaging services. Blindly implementing end-to-end encryption will significantly increase the risk and harm to children around the world, not to mention leave Facebook unable to contend with other illegal and dangerous activities on its services.

We should continue to debate the balance between the privacy afforded by end-to-end encryption and the cost to our safety. In the meantime, recent advances in encryption and robust hashing mean that technologies like photoDNA (i.e. robust image hashing) can be adapted to operate within an end-to-end encrypted system. We should make every effort to find a balance between privacy and security, and not simply sacrifice one for the other.
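
To illustrate one of the directions those advances point to, here is a deliberately simplified sketch of client-side matching, in which the sender’s device checks a perceptual hash against known signatures before anything is encrypted, so the service never needs to see plaintext. Real proposals go further (for example, using private set intersection so the client cannot read the signature list itself); every function name, the XOR “encryption” placeholder, and the reporting hook below are hypothetical and are not a description of any deployed system.

```python
from typing import Optional

def hamming_match(image_hash: int, known_hashes: set[int], threshold: int = 10) -> bool:
    """Same Hamming-distance comparison as the earlier sketch, run on the sender's device."""
    return any(bin(image_hash ^ k).count("1") <= threshold for k in known_hashes)

def send_image(image_bytes: bytes, image_hash: int,
               key: bytes, known_hashes: set[int]) -> Optional[bytes]:
    """Check the hash locally, then encrypt; the service only ever sees ciphertext."""
    if hamming_match(image_hash, known_hashes):
        # In a real system this would trigger a report (e.g., to NCMEC's
        # CyberTipline) rather than silently dropping the message.
        return None
    # Placeholder for a real end-to-end encryption scheme (e.g., the Signal
    # protocol); a repeating-key XOR is used here only to keep the sketch runnable.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(image_bytes))
```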

Counter-arguments
The arguments against better content moderation, and in favor of end-to-end encryption without safeguards, usually fall into one of several categories.

Freedom of expression. It is argued that content moderation is a violation of the right to freedom of expression. It is not. Online services routinely ban protected speech for a variety of reasons, and can do so under their terms of service. Facebook and YouTube, for example, do not allow (legal) adult pornography on their services and do a fairly good job of removing this content. They do this because, without this rule, their services would be littered with pornography, scaring away advertisers. You cannot ban protected speech and then hide behind freedom of expression as an excuse for inaction.

Marketplace of ideas. It is argued that we should allow all forms of speech and let users choose from the marketplace of ideas. There is, however, no counter-speech to child sexual abuse material, bomb-making and beheading videos, threats of rape, revenge porn, or fraud. And even if there were, the marketplace of ideas only works if the marketplace is fair. It is not: the online services have their thumbs on the scale because they promote the content that keeps users on their services longer, and this content tends to be the most outrageous, salacious, and controversial.

Sunshine. It is argued that “sunshine is the best disinfectant,” and that the best way to counter hate speech is with more speech. This, again, assumes a fair marketplace in which ideas are given equal airtime and the dialogue around competing viewpoints is reasoned, thoughtful, and respectful. Perhaps this is true at the Oxford debate club, but it is certainly not the case on YouTube, Twitter, and Facebook, where some of the most hateful, illegal, and dangerous content is routinely shared and celebrated. Perhaps sunshine is the best disinfectant: but for germs, not the plague.

Complexity. It is argued by technology companies that content moderation is too complex because material often falls into a gray area where it is difficult to determine its appropriateness. While it is certainly true that some material can be difficult to classify, it is also true that large amounts of material are unambiguously illegal or violations of terms of service. There is no need to be crippled by indecision when it comes to this clear-cut content.

Slippery slope. It is argued that if we remove one type of material, then we will remove another, and another, and another, thus slowly eroding the global exchange of ideas. It is difficult to take this argument seriously because in the physical world we place constraints on speech without the predicted dire consequences. Why should the online world be any different when it comes to removing illegal and dangerous content?

Privacy. It is argued that end-to-end encryption, without safeguards or access under a lawful warrant, is necessary to protect our privacy. Erica Portnoy, of the Electronic Frontier Foundation (EFF), for example, argues that “A secure messenger should provide the same amount of privacy as you have in your living room. And the D.O.J. is saying it would be worth putting a camera in every living room to catch a few child predators.” On the first part, we agree: you have certain expectations of privacy in your living room, but not absolute privacy. On the second part, we disagree. First, the DOJ is not asking to place a camera in every living room; it is asking to be allowed to view content when a lawful warrant has been issued, just as it can in your living room. And second, is the EFF really comfortable referring to the 45 million pieces of CSAM reported to NCMEC last year as “a few child predators”?

Conclusions
We can and we must do better when it comes to contending with the horrific spread of child sexual abuse material. I reject the naysayers who argue that it is too difficult or impossible, and those who say that reasonable and responsible content moderation will lead to the stifling of an open exchange of ideas.

Professor Hany Farid is a Professor at the University of California, Berkeley, with a joint appointment in Electrical Engineering and Computer Sciences and the School of Information. His research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, his MS in Computer Science from SUNY Albany, and his PhD in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth College in 1999, where he remained until 2019. He is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and is a Fellow of the National Academy of Inventors.