Jānis Sārts
Securing Digital Natives!

Recently I was at a party with friends, where one of them, knowing the issues I work on, came to me with a story that had deeply disturbed him. He told me how, one evening, he and his wife had been discussing the need for a break from their intensive work schedules and a short escape to a European city. After some deliberation, they settled on Barcelona. That same evening, he was spooked to find, on opening his Facebook account, that the first posts in his feed were recommendations for Barcelona, even though they had not searched for any flights or hotels. His conclusion was that Facebook was listening to them; a deduction he wanted to confirm with me.

That was not the first time I had been asked that question on hearing a similar story. My answer was that, for all we know, Facebook does not listen to our private conversations: it simply has very rich datasets and increasingly good artificial intelligence (AI) algorithms that give it the ability to predict our future behaviour. I could see my friend was not fully convinced. It is hard to accept that we are so predictable and so easily influenced.

To me, this story once again illustrated how unprepared we are, as societies and as individuals, to face a data-driven world. As people emit ever larger amounts of data in the digital world, players equipped with increasingly sophisticated technological tools and the latest research from the cognitive sciences are gaining deeper insights into our behaviour patterns and decision making. This allows a few to exert increasing influence over many. However, the scale and efficiency of these operations are unclear, since there is little publicly available, reliable data on technology-driven influence on people's behaviour.

To assess the potential practical effects of data-based influence on behaviour, the NATO-affiliated Strategic Communications Centre of Excellence conducted an experiment in 2018.¹

During the experiment, a team of researchers was embedded in the red team of a NATO military exercise. They were tasked with gathering open-source data on the military personnel involved in the exercise and, based on the available datasets, seeking to influence their behaviour during the exercise. The results were very disturbing. The researchers were able to incite military personnel to act against the orders they had been given (leaving the positions they were meant to defend) and to induce other behaviours that were counterproductive to the successful outcome and the security of the exercise. It is worth mentioning that these were full-time, professional military personnel who had received training on the risks of the digital environment.

Although this was a limited experiment with a narrow focus, I believe it offers at least an insight into how big data, AI, and the cognitive sciences can be misused, and into their potential power to induce behaviours: even ones that are clearly counter to the best interests of the individuals in question and their organizations.

What, in my view, are the wider implications of these conclusions? Emotional and instinctive human decision-making is an easy target for this kind of influence, and the rational assessment of the information we consume can be circumvented rather easily. The data that we, as citizens of increasingly digitalized societies, produce is very rich and easy to retrieve. Some of the richest datasets have been produced by us and by people very close to us, clearly without understanding what this data can reveal about us, or how data emitted from different sources can be cross-referenced to profile an individual. The longer people have been "digital", the richer the data becomes, the more accurate an insight into an individual one can gain, and therefore the more efficiently behaviour can be influenced.

Interestingly enough, in the current digital environment it is very hard to detect whether anyone is using such technologies and techniques to change behaviour, because of the lack of transparency.

Clearly, children and young people are among the most vulnerable groups. Many of them are digital natives from the moment they are able to walk (sometimes even earlier). One consequence is how rich the data gathered over the course of their lives may become. In terms of privacy, this means that companies and AI can not only obtain a reasonably full picture of who you are, what you do, and how you currently act, but can also track these datapoints over many years, potentially yielding very deep insights into a personality and its driving factors.

Another risk concerns young people's decision-making. As the experiment described above demonstrates, it is easy to trigger an adult's instincts and emotions to produce a desired behaviour.

Young people, adolescents in particular, are especially prone to making emotional and instinctive decisions. The same age group also makes intensive use of digital interaction and communication tools, thereby enriching the available data considerably relative to other age groups. This group may therefore be the most vulnerable to psychological influence in the digital arena. At the same time, of course, we have seen young people develop, through the many experiences of their digital lives, an organic resilience to some of these effects that we do not see in older groups.

In sum, if data is the oil of the 21st century, youth is one of the richest, if not the richest, future oilfields in the human landscape, and we have very little understanding of who is drilling this oilfield, and for what purpose.

I agree with those who believe in the potential of new technologies to make our lives and societies significantly better. At present, however, most data systems are used to create better-targeted ads and to steer our choices as users. This technology can and should be used to create better healthcare, individualized education, more efficient public transport, better use of public resources, and so on. But as we strive for these goals, we should always remember the inadvertent negative effects technologies can help to create. I think the balance lies in developing technologies that take societal effects and possible risks into account, while creating regulatory frameworks that do not impede technological development.

Some potential ways forward
We clearly need to agree upon what constitutes the ethical and moral use of data! As AI develops, data will provide more and more opportunities. With the introduction of 5G infrastructure and the Internet of Things (IoT), the amount of data that can be generated will grow exponentially. I do not believe we should embrace every opportunity new technology offers. I think we need clear rules to identify where AI is making us behave differently, and to prevent it from doing so. To establish these rules, we need to examine how human rights and freedoms can be applied to the digital environment.

Secondly, we clearly lack transparency. How is my data being used? Is someone trying to affect my behaviour based on harvested data? Is somebody buying my data? Although the GDPR has given individuals some control, it is not enough.

If data is so powerful, should we allow data on children under the age of 18 to be collected at all, and if so, under what circumstances? I can see a case for allowing such data collection on minors (for education or healthcare, for example) only under strictly defined, and very few, circumstances.

Of course, introducing digital hygiene training into school curricula from the very first years of school is a clear requirement. We also need to invest in new educational tools for digital hygiene and secure online behaviour for minors: relevant games that use virtual and augmented reality technologies to enhance the learning experience while making it appealing, contextual, and fun.

Jānis Sārts is Director of the NATO Strategic Communications Centre of Excellence. He began his career in the Ministry of Defence of Latvia, where he served as State Secretary for seven years and headed the Latvian Government's efforts to increase security and defence in cyberspace. He also chaired the National Cyber Security Board, the body responsible for formulating and overseeing the implementation of Latvia's cyber security policy.