Dr Ian Levy
On the Need for Software Safety in a Digital World

Society’s expectation of children’s rights has evolved with our environment, context and technology. We no longer send small children up chimneys or down mines. We no longer believe it’s OK for children to drink gin. We try to ensure that children get a decent education. We believe that children should have greater control over their lives and opportunities to express themselves fully. In the digital age, what is the next step in the evolution of children’s rights? I believe it’s the right to safe software.

We all want our children to be safe and we take actions – both individually and as societies – to try to protect them from hazards and harms. Those hazards and harms fall broadly into three categories.

The first is hazards and harms with obvious protections. For example, it’s reasonable to expect adults to know that knives are inherently dangerous products that a child is incapable of handling safely. Individual adults make consistent risk decisions – don’t let kids have knives. As a society, we further bolster that protection by making it illegal to buy a knife if you’re under 18. That protection is both for the putative purchaser and to reduce the second-order effects (i.e. youth knife crime). We can’t expect children to process complex information in the same way as adults, and therefore we can’t expect children to fully understand the risks – both primary and secondary – so we put protections in place.

The second is a set of hazards and harms where the details of protections matter but are not obvious. For example, consider child car seats. We all accept that car crashes are a hazard and that a child involved in a crash is at risk of harm. But how many of us could work out the correct density of the material in the car seat that absorbs the impact? Or design ISOFIX mounts, or seat belt loops? It’s not reasonable to expect parents to make these assessments, so as a society we use experts to set minimum standards and enforce those standards through law to ensure adoption. These expert-driven standards govern certain parts of our lives, providing protection against everything from toxic paint in toys to more complex hazards in areas such as medical drug safety. Failing to adhere to the standards has significant consequences for those supplying the goods involved.

Finally, there is a set of non-obvious hazards and harms that only experts can conceive of before they happen at scale. Given the knowledge at the time, who could have predicted the ‘magic mineral’ asbestos would have such a terrible effect on health? Who would have predicted that square windows on early commercial aircraft would lead to fatal structural failure? In retrospect, with our knowledge today, these are obvious. Similarly, the mental health impacts of a mobile phone that automatically airbrushed selfies are obvious in retrospect, but they weren’t to the software engineer who invented it. Neither were the harms caused by social media-enabled cyber bullying, or by paedophiles contacting children using credible (but false) online personas. Data-driven micro-targeting on the internet probably wasn’t an obvious consequence of the first supermarket loyalty card to its inventor.

Across these different types of hazards and harms, individuals and societies provide protection for our children before the harms accrue. We engender an implicit right to a safe childhood. What does this look like in the future digital age? What’s the digital equivalent of making a safe car seat? Or ensuring toys don’t contain lead paint?

I think the physical hazards and harms faced by children today will be broadly the same in the future, but the digital hazards and harms that children already face today will become much more impactful. Currently, our digital identity is secondary to our physical identity, so when a large-scale data breach occurs (as it does all too often), the actual impact on people is relatively small. Of course, there are exceptions: the disclosure of a sexual health clinic’s customer list, for example, would have a significant impact on those people. On the whole, though, our digital identities are fungible. For our children, their digital identity will, for all intents and purposes, become their immutable, primary identity.

Today, we take ubiquitous internet connectivity for granted, and its absence is merely an inconvenience. For our children, ubiquitous connectivity will be necessary for them to function in society. With their experiences increasingly lived through and affected by technology, software will pervade every aspect of their lives. It already enables ubiquitous communication, whether as part of our mobile phone infrastructure or through the social media platforms we increasingly rely on. Software is what enables our devices and apps to do the apparently magical things they manage to do. Software is what keeps our critical infrastructure working optimally. In the future, software will have more direct impacts on us, and will even be the arbiter of certain parts of our lives, deciding whether we can do particular things. This is what leads me to contend that our children have a right to safe software.

Safe software should, at its most basic, protect itself from cyber-attack. We’ve seen poor cyber security lead to real-world hazards and harms for children: fitness trackers that could be abused by anyone to monitor the location of any child wearing the device; medical devices that attackers can control to the detriment of the patient; connected toys that expose young children to malfeasance from attackers close to them; and online services aimed only at children that leave their users’ details available to anyone. These examples show that even the most basic security issues aren’t always considered when designing digital stuff for children. We should be able to root out this sort of pathological stupidity by setting basic standards and ensuring they’re met, something we’ve started to do with our code of practice for consumer internet of things devices.

In my opinion, this belongs in the first set of hazards we explored earlier: the hazards and harms are obvious, as are the mitigations. Others will say that even the most obvious cyber security mitigations aren’t obvious to the majority of the population, and that this should sit in the second category. I’m not sure it matters, and perhaps this ambiguity is an example of how hard it is for the public to really understand these digital hazards. Either way, there should be no excuse for software not to exhibit basic cyber security. The consumer law that applies in the physical world seems to apply directly here to software, devices and services.

Safe software should minimise the harm directly caused by its use.

Safe software shouldn’t help kids get access to damaging content and shouldn’t target them with adverts for inappropriate products and services like nicotine and cosmetic surgery.

Safe software should help engender safe online behaviours in our children and not require them to divulge huge amounts of personal data to access a service. Data consent is impossible for most adults to understand, so it seems ridiculous to expect children to give informed consent.

Safe software should help minimise excessive screen time and design out features that will adversely affect our children’s mental health.

Safe software should not track our children’s behaviours online, other than to provide a safety net to nudge them when they’re doing risky things and to intervene when they’re doing dangerous things.

Safe software should provide a simple way for children to ask for help when they’ve made a mistake and for that help to be provided quickly and painlessly, whether the mistake was sexting someone, remotely opening the house to a burglar or reporting some bizarre symptoms to a future medical AI to try to get out of school.

Safe software should help its users protect themselves in the real world wherever possible.

And probably most importantly, safe software should be built with the safety and security needs of its users top of mind, rather than the profit of its developer. Just as in the real world, many digital spaces are shared between children and adults, and yet the software behind them tends to treat all users as adult users. Children’s needs are different to adults’, but they have a right to have them satisfied.

These are broadly the sort of hazards and harms we see online today, and they will only get more numerous and more impactful as our technological innovation continues apace. However, I believe these hazards and harms broadly fall into our second category: at a high level they’re obvious, but the solutions may be complex and require technical know-how to understand. We need to change the narrative we have today, which is largely based on hyperbole, distraction and fear. What is the software equivalent of the ECE R44 ‘safe car seat’ mark to help children and parents make good purchasing choices? Again, consumer law in the physical world broadly translates to the digital world, but we need to better understand standards of due diligence in the digital world.

As we move towards a world where software pervades every aspect of our lives, I believe we’ll move into the third category of hazards and harms: those that we, as a society, can’t easily predict (if at all). There’s often no physical equivalent, and there are certainly no consumer law protections in place here. Think about the data economy/surveillance capitalism we have today, where we effectively barter for services using our data. Originally conceived to target adverts, this economy is starting to show its darker side, with intrusive and inappropriate uses of these data in cases like Cambridge Analytica. The discoverable power in those data (and the concomitant potential impact on real lives) will continue to increase if left unchecked, leading to an unrecoverable erosion of long-term digital privacy for our children’s generation. We now understand this fact: we should do something about it.

We know that the pervasive use of new technologies will affect users in ways we don’t really understand, because we’ve not spent the time researching the downsides. For example, I have real worries about the possibility of biased artificial intelligence algorithms disadvantaging an entire sub-population of a generation in ways we cannot conceive of today.

I worry about us building critical services that our children will rely on for their daily lives using infrastructure that was never designed to support this.

I worry about malign nation states attacking the companies that build these systems to ensure they have cyber-attack capabilities or long-term leverage over other nations, which really means over the citizens of those nations.

I worry about us continuing to judge the safety and security of large-scale software systems through marketing hype and biased rhetoric, rather than science and evidence.

But most of all, I worry about software taking the place of the people who teach our children social norms, respect for others, and the ability to judge real-world risk on a day-to-day basis. It seems that we are destined to repeat the tragedies of the past as software is integrated into our lives. The devastating effects of thalidomide in pregnancy were not adequately considered at the time, a tragedy that led to much more stringent drug testing and regulatory regimes around the world. While the effects software will have on our children will be very different, I believe they could be of the same order of harm, but at a scale we have not seen before. But we can fix this. We have the majority of the science we need, and we have proven harm reduction approaches in other spheres of life that we can re-purpose. We just need to apply them properly.

I believe in neither the utopian nor dystopian view of our technological future. But I absolutely believe that software will forever change how our children live, interact, work, play and grow. It will change their fundamental relationship with the things around them, including their ability to communicate, exchange knowledge and possibly even their thoughts and dreams. And I know for a fact that the way we build software, the way we deploy software, services and devices, and the way we talk about the very real risks that can accrue from the unwise use of software are wholly inappropriate for the risks our children will face.

We need to learn from the past and from other sectors. Software is not benign, and it will never be error free. The companies that build and profit from software are rarely entirely altruistic and will often have incentives that aren’t aligned with the safety and security of our children. As a society we need to decide when it’s acceptable to use software, what long-term impacts we’re willing to tolerate for the service or benefit we get, and how we judge and regulate the systems that enable all this. Pervasive software could be a massive force for good in our children’s lives, but it is unlikely to become that if the market is left to its own devices. That’s true in digitally advanced markets like the UK, but it is also true in markets where technology is only just being introduced and citizens don’t yet have the digital skills necessary to operate safely.

I believe a child’s right to safe software is essential to ensure their safe and secure future, wherever they live on the globe. And it is our collective responsibility to ensure that right becomes a reality before irreparable harm happens. It’s for government, academia and the tech industry to provide us with a language to describe these things, but it’s up to all of us to demand better from our software, services and devices.

Dr Ian Levy is Technical Director of the National Cyber Security Centre and has led GCHQ’s technical cyber defence work for almost two decades. He leads on developing defences to manage cyberthreats, fostering technical innovation to find solutions that can protect the UK from attack and malicious activity. Ian completed his Doctor of Philosophy (PhD) in Computer Science at the University of Warwick.