
We live in an era when the volume of data available on each of us is unprecedented. This so-called ‘Big Data’ is enabling technologists to know us better than we know ourselves. Obviously, the technology facilitates this profiling. But we as individuals are surrendering the smallest of personal details – often unknowingly – through our everyday online actions.

This allows our profiles to be compiled, revealing ever greater insight into our everyday behaviour. For example, Facebook knows where the vast majority of its nearly two billion active users live, work, and socialize. It knows what time we get up and when we go to bed – and perhaps even with whom. It shares data with WhatsApp (which it owns) to learn who we message and call, when and where, refining its understanding of us [1].

So, should we be scared of Big Data and its ability to manipulate our behaviour?

I’ve recently contributed a chapter to a forthcoming book, ‘The World Information War: Western Resilience, Campaigning, and Cognitive Effects’, which outlines the threats from information warfare faced by the West and analyses the ways it can defend itself.

My chapter shows how interference such as that witnessed during the 2016 US Presidential election may be in its infancy, and how developments in Artificial Intelligence (AI) pose profound questions for our security and society, driven by three specific scientific, technical, and commercial trends:

  1. Life and behavioural sciences are increasingly converging on the idea that humans are biochemical algorithms that can be ‘nudged’ to shift opinions and behaviours.
  2. The rise of big data and social media is providing unprecedented levels of insight into, and a growing ability to predict, individual and group behaviour.
  3. The emergence of data-driven influence campaigns, such as those employed by both the Trump and Vote Leave campaigns in 2016.

Big Data is watching us!

This rapid increase in our understanding of, and ability to manipulate, human behaviour comes in an era of ‘Big Data’, when the information available on each of us to those who might want to nudge our behaviours grows every day. More data means more insight – and makes us more manipulable.

The average Internet user spends 6 hours and 42 minutes online each day [2]. Half of that is spent on mobile devices [3]. That equates to more than 100 days of online activity every year for every Internet user – more than 27 percent of the year. For those tracking this activity, our web use leaves a lot of evidence of the kind of person we are. It also highlights our vulnerabilities.
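Those figures are easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python, using only the 6 hours 42 minutes daily figure from [2]:

```python
# Back-of-the-envelope check of the screen-time figures quoted above.
minutes_per_day = 6 * 60 + 42               # 6 hours 42 minutes online per day [2]
hours_per_year = minutes_per_day / 60 * 365

days_per_year = hours_per_year / 24         # full 24-hour days of online activity
share_of_year = days_per_year / 365         # fraction of the year spent online

print(f"{days_per_year:.1f} days online per year")  # ~101.9 days
print(f"{share_of_year:.1%} of the year")           # ~27.9%
```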

Similarly, what we watch on smart TV streaming services, when and how we watch it, what we listen to on music streaming services, and data from exercise and calorie-tracking apps can all provide deep insight into who we are. This level of surveillance – continuous, unblinking, ubiquitous, and perhaps inescapable – has far-reaching implications.

Data-driven influence campaigns 

We are already seeing the confluence of Big Data insights with behavioural-science-based manipulation through data-driven influence campaigns. The most widely known of these was Cambridge Analytica’s involvement in the 2016 US Presidential election. In its own words, it used ‘…data to change audience behaviour’, providing real-time insights and offering to change the preferences of target audiences for both political and commercial ends.

Depending on your viewpoint, the company has been credited with – or accused of – playing a major role in Trump’s victory in that election, as well as in the Vote Leave campaign’s victory in the UK’s EU referendum in the same year. Cambridge Analytica’s effectiveness may well have been overstated – both by its own marketing material and by political opponents of the campaigns it supported. But the trend for which it remains the most widely known example is real: influence campaigns are increasingly reduced to equations, and AI will soon replace large numbers of mathematicians and physicists as surely as they themselves are replacing arts graduates.

Without AI, influence campaigners and analysts alike will be overwhelmed by the volume of Big Data available to them. With it, the methods of the behavioural sciences can be applied on an unprecedented scale.
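To make that concrete, here is a deliberately simplified, hypothetical sketch of the kind of pipeline such campaigns are reported to use: fit a model that predicts engagement from digital-trace features, then rank users by predicted receptiveness to decide who sees a message. Every feature, number, and outcome below is invented for illustration; real campaigns are far messier.

```python
# Hypothetical, simplified sketch of data-driven targeting: score users on
# how likely they are to respond to a message, then rank them so a campaign
# can focus on the most 'nudgeable'. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic digital-trace features per user:
# [hours online/day, political posts engaged with/week, late-night activity ratio]
X = rng.random((1000, 3)) * [10, 50, 1]

# Synthetic past outcome: did the user engage with a previous test message?
# (In a real campaign this would come from A/B-tested ad impressions.)
y = (0.2 * X[:, 0] + 0.05 * X[:, 1] + 2 * X[:, 2]
     + rng.normal(0, 1, 1000)) > 2.5

model = LogisticRegression().fit(X, y)

# Score a fresh batch of users and target the top 10% most receptive.
new_users = rng.random((200, 3)) * [10, 50, 1]
scores = model.predict_proba(new_users)[:, 1]
targets = np.argsort(scores)[-20:]
print(f"Top-scoring users to target: {targets}")
```

The point is not the few lines of code but the scale: the same scoring step runs as easily over two hundred users as over two hundred million.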

What has changed is the scale, surreptitiousness, and speed of the modelling and measurement of effect, and the sophistication and range of the psychological research being applied. As the volume of data available on each of us grows, the methods of online manipulation improve, and their domestic applications multiply, foreign interference is likely to grow too.

The science of disinformation

Today, with our understanding of human psychology much advanced, disinformation is becoming more science than art. Big Data, AI, and social media are enabling influence campaigns to target vulnerable groups more precisely, measure effect more accurately, and operate on an industrial scale. It seems unlikely that tinkering around the edges of the system we have will be sufficient to combat the threat.
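As a toy illustration of what ‘measure effect more accurately’ means in practice, the sketch below compares engagement rates between a group shown a message and a control group that was not. The counts are invented, and real campaigns layer far more sophistication on top.

```python
# Minimal sketch of measuring an influence campaign's effect: an A/B
# comparison of engagement between users shown a message (treatment)
# and users who were not (control). All counts are invented.

def lift_with_ci(treat_hits, treat_n, ctrl_hits, ctrl_n, z=1.96):
    """Difference in engagement rates with an approximate 95% confidence interval."""
    p1, p2 = treat_hits / treat_n, ctrl_hits / ctrl_n
    se = (p1 * (1 - p1) / treat_n + p2 * (1 - p2) / ctrl_n) ** 0.5
    diff = p1 - p2
    return diff, (diff - z * se, diff + z * se)

diff, (low, high) = lift_with_ci(treat_hits=230, treat_n=1000,
                                 ctrl_hits=180, ctrl_n=1000)
print(f"Estimated lift: {diff:.1%} (95% CI {low:.1%} to {high:.1%})")
```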

Data brokers – companies that buy, agglomerate, and sell our data, from credit card history and loyalty card information to web use, media streaming and, with the Internet of Things, our use of power, water, and domestic appliances – demonstrate that the problem is much wider than just social media. With both the information available on citizens and the ability to disseminate disinformation now operating on a much larger scale than ever before, the threat is much greater. The West’s response will need to be at least as decisive and comprehensive.

So, to answer my earlier question, yes, we should be scared.

Find out more…

‘The World Information War: Western Resilience, Campaigning, and Cognitive Effects’, published by Routledge, is available for pre-order and will ship after 11th May. Visit: https://www.routledge.com/The-World-Information-War-Western-Resilience-Campaigning-and-Cognitive/Clack-Johnson/p/book/9780367496517

Fujitsu’s stance

At Fujitsu we are acutely aware of the potential risks posed by the development of unethical AI systems. As such, we are engaging with industry, academia, and regulators as they continue to investigate and develop good-practice measures and guidelines to ensure rigorous governance and the ethical use of AI solutions across a wide range of industry applications.

Like the technology, this is an area that is changing rapidly, and one that we are monitoring closely. Until AI solutions reach the point where government regulators and industry have provided applicable deployment and mature usage guidelines, Fujitsu will seek to improve its development practices through direct engagement with industry standards bodies and participation in the wider community, to improve the deployment of AI solutions.

 

[1] Lomas, N. (2016). ‘How to Opt Out of Sharing your WhatsApp Info with Facebook’. TechCrunch, 26 August. https://techcrunch.com/gallery/how-to-opt-out-of-sharing-your-whatsapp-info-with-facebook/ (accessed 4 January 2020).

[2] Kemp, S. (2019). ‘Digital Trends 2019: Every Single Stat you Need to Know about the Internet’. The Next Web, 30 January. https://thenextweb.com/contributors/2019/01/30/digital-trends-2019-every-single-stat-you-need-to-know-about-the-internet/ (accessed 7 December 2019).

[3] MacKay, J. (2019). ‘Screen Time Stats 2019: Here’s How Much you Use your Phone during the Workday’. RescueTime Blog, 21 March. https://blog.rescuetime.com/screen-time-stats-2018/ (accessed 4 January 2020).

 


Keith Dear

Dr. Keith Dear is Director of Artificial Intelligence Innovation at Fujitsu Defence and National Security. Keith has served as an Expert Advisor to the Prime Minister on Defence Modernisation and the Integrated Review, leading also on UK space strategy in No. 10 and advising on national strategies on emerging technology. A former Intelligence Officer in the RAF, he has served in Iraq, completed three deployments to Afghanistan, deployed to Abkhazia (Georgia) with the United Nations and to Mali alongside the French, and served on exchange with the US Air Force.

Keith now continues his service as a Group Captain (Reserve) in 601 Squadron, leading on science, technology, and academic liaison. He is a Chief of the Air Staff’s Fellow, a Research Associate at Oxford’s Changing Character of War Programme, and an Associate Fellow at RUSI, where he guest-edited the Special Edition on AI in November 2019.

He speaks widely on AI, Big Data, and decision-making, and was named one of the most relevant voices in European tech by the leading ‘Big Things’ business conference in 2019. He holds a DPhil in Experimental Psychology from the University of Oxford. In 2011, he was awarded King’s College London’s O’Dwyer-Russell prize for his MA studies in Terrorism and Counter-Terrorism. He co-leads the Defence Entrepreneurs’ Forum (UK) and was founder and CEO of Airbridge Aviation, a not-for-profit start-up dedicated to delivering humanitarian aid by cargo drones.
