
As Artificial Intelligence (AI) becomes further embedded into our day-to-day lives, maintaining the integrity of these systems and the data they use is becoming increasingly critical. Unfortunately, this growing pervasiveness has given unscrupulous attackers the opportunity to exploit vulnerabilities found within machine learning models, and this has given rise to ‘Adversarial AI’.

The potential impact of adversarial AI on our society, and the harm it could do to our security, trust and general wellbeing, will only become more apparent as the adoption of autonomous systems continues apace. So, what are the implications for our national security, and what can we do to mitigate and plan against these risks in the military environment?

What is Adversarial AI?

The idea behind an adversarial AI attack is fundamentally simple. An attacker generates and introduces small changes to a model’s input data that – although imperceptible to the human eye – can cause major changes to the output of an AI system. As a result, the machine learning model misinterprets the data it is fed and behaves in a way that’s favourable to the attacker.

To produce unexpected behaviour, attackers create ‘adversarial examples’. These often resemble normal inputs, but are meticulously optimised to break the model’s performance. Attackers typically craft them by building models that repeatedly make minuscule changes to an AI system’s inputs until its output changes in the way they want.
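
To make this concrete, below is a minimal sketch of one well-known way of crafting an adversarial example, the Fast Gradient Sign Method (FGSM), written in Python with PyTorch. FGSM is offered purely as an illustration of the kind of gradient-guided perturbation described above; the model, input image and epsilon value are placeholders, not details taken from any real attack.

```python
# Illustrative sketch only: crafting an adversarial example with the
# Fast Gradient Sign Method. The classifier and "image" below are
# random placeholders standing in for a real model and dataset.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return a slightly perturbed copy of `image` aimed at changing the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny amount in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range

# Usage sketch with a stand-in classifier and a random input.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # placeholder image batch of one
label = torch.tensor([3])          # placeholder ground-truth class
adversarial_image = fgsm_example(model, image, label)
```

The perturbation budget, epsilon, is what keeps the change imperceptible to a human while still being large enough to flip the model’s decision.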

When this kind of manipulation targets the data a model learns from, the attacks are known as ‘poisoning attacks’. Image classification systems are a good example: here, an attacker can introduce carefully crafted noise into the images a classifier is trained on, completely altering the results it produces[1].
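
By way of illustration, the sketch below shows what a simple poisoning step might look like: a small fraction of the training images is given a barely visible perturbation and a flipped label before the classifier is ever trained. The dataset, poisoning fraction, noise scale and target label are all illustrative assumptions, not details of the attack cited above.

```python
# Illustrative sketch of a data-poisoning attack on an image training set.
# All values here (fraction poisoned, noise scale, target label) are assumptions.
import numpy as np

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   fraction: float = 0.05, noise_scale: float = 0.02,
                   target_label: int = 0, seed: int = 0):
    """Perturb a small fraction of training images and mislabel them."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    # A barely visible perturbation plus a flipped label teaches the
    # classifier a spurious association during training.
    poisoned_images[idx] += rng.normal(0.0, noise_scale, size=images[idx].shape)
    poisoned_images = poisoned_images.clip(0.0, 1.0)
    poisoned_labels[idx] = target_label
    return poisoned_images, poisoned_labels

# Usage sketch with random placeholder data.
images = np.random.rand(1000, 32, 32, 3)
labels = np.random.randint(0, 10, size=1000)
poisoned_images, poisoned_labels = poison_dataset(images, labels)
```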

While this might sound like a ‘fun’ research exercise, imagine the damage it could cause in a scenario such as self-driving vehicles. Attackers could target an autonomous vehicle by placing stickers on, or painting over, a ‘stop’ sign so that the vehicle interprets it as an entirely different instruction[2].

Implications for Defence and National Security

Clearly, adversarial AI has the potential to become a major security threat. If an adversary can identify a behaviour in a model that is unknown to the system’s developers, they can exploit that behaviour to bring about the consequences they intend. As such, adversarial attacks pose a significant threat to the stability and safety of defence and national security systems[3], where AI and robotic technologies are increasingly being incorporated.

The challenge for Defence, as for many commercial organisations, is knowing the exact conditions under which such attacks can occur. These conditions are typically unintuitive for humans, and it is notoriously difficult to predict when and where an attack might take place.

Even where the likelihood of an adversarial attack can be estimated, the exact response the AI system will produce is extremely difficult to predict. This could lead to less safe military engagements and interactions, and to trust being compromised. Suitable response mechanisms will therefore have to be verified against these types of scenario, and possibly more, before they can be relied upon to guide the right course of action.

Systems of Systems design approach

To combat the threat, Defence should consider a ‘Systems of Systems’ design approach towards understanding how compromised information from the edge affects the overall stability of the decision support system. This also ensures that each part of the critical chain has suitable checks and balances[4]. A risk analysis of the combined system may reveal a very different risk impact and likelihood from an analysis of any single component.
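
As a rough illustration of why the combined picture can look different, the sketch below rolls up per-subsystem likelihood and impact scores across a simple decision chain. The scoring model and the numbers are illustrative assumptions for this post, not a prescribed Defence methodology.

```python
# Illustrative sketch of a systems-of-systems risk roll-up. The scoring
# model and the example numbers are assumptions, not an official method.
import math
from dataclasses import dataclass

@dataclass
class Subsystem:
    name: str
    likelihood: float  # chance its inputs are compromised at the edge (0..1)
    impact: float      # consequence for the overall decision if they are (0..1)

def combined_risk(chain) -> float:
    """Chance that at least one link is compromised, weighted by worst-case impact."""
    p_any_compromised = 1.0 - math.prod(1.0 - s.likelihood for s in chain)
    worst_impact = max(s.impact for s in chain)
    return p_any_compromised * worst_impact

chain = [
    Subsystem("edge sensor", likelihood=0.10, impact=0.4),
    Subsystem("fusion node", likelihood=0.02, impact=0.7),
    Subsystem("decision support", likelihood=0.01, impact=0.9),
]
# Each component looks low risk on its own, but the combined score is higher.
print(f"Combined risk score: {combined_risk(chain):.3f}")
```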

Where this cannot be assessed in real operational environments prior to use, simulation offers an alternative route to the evidence required to allow continued operation with the necessary system assurance guarantees[5]. A ‘systems of systems’ approach also enables Defence to further assess, and potentially limit, the impact of compromised information by evaluating where humans are best placed within the decision-making loop to ascertain the reliability of information.

Within the context of human machine teaming[6], humans could be trained to monitor for such attacks and to help guide AI systems back towards appropriate behaviours within known safety boundaries. The aim is to ensure that any future operator has the situational awareness needed to take full control of the AI system safely and effectively.
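
A minimal sketch of what such a human-in-the-loop guard might look like is below: the AI system’s decision is acted on automatically only while it stays inside an assumed safety boundary, and is otherwise escalated to the operator. The boundary values and the request_operator_review helper are hypothetical names invented for this illustration.

```python
# Illustrative sketch of a human-in-the-loop guard around an AI system's output.
# The safety boundary values and helper names below are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelDecision:
    action: str
    confidence: float  # the model's own confidence, in [0, 1]

ALLOWED_ACTIONS = {"hold", "track", "report"}  # assumed safety boundary
CONFIDENCE_FLOOR = 0.85                        # assumed safety boundary

def request_operator_review(decision: ModelDecision) -> str:
    # Placeholder for the interface that gives the operator the situational
    # awareness to take control; here it simply falls back to a safe action.
    print(f"Operator review requested for: {decision}")
    return "hold"

def gated_action(decision: ModelDecision) -> str:
    """Act on the decision only if it stays inside the safety boundary."""
    inside_boundary = (decision.action in ALLOWED_ACTIONS
                       and decision.confidence >= CONFIDENCE_FLOOR)
    return decision.action if inside_boundary else request_operator_review(decision)

print(gated_action(ModelDecision(action="track", confidence=0.92)))   # automated
print(gated_action(ModelDecision(action="engage", confidence=0.97)))  # escalated
```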

Protecting AI against adversarial attacks

It is clear that protecting AI against adversarial attacks is extremely challenging. As with cyber security, most defences assume prior knowledge of the attacks they must withstand, which is far from ideal in real-world settings such as military environments. Adversarial techniques are constantly evolving, and bad actors regularly develop new attack methods, so AI systems face attacks that were never evaluated during their training phase.

What makes adversarial attacks different from conventional cyber threats, however, is how little is known about their nature and about the countermeasures available. Defence therefore has to make testing AI systems against adversarial attacks a key requirement, embedded within the lifecycle and maintenance of mission-critical applications.
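
One way this could look in practice is sketched below: a simple release gate that measures the model’s accuracy on adversarially perturbed inputs (for example, using the fgsm_example helper sketched earlier) and blocks deployment when it falls below a threshold. The threshold, the function names and the idea of wiring this into a release pipeline are illustrative assumptions, not a mandated standard.

```python
# Illustrative sketch of an adversarial-robustness release gate in the
# model lifecycle. The threshold and names are assumptions; `attack` can be
# any adversarial-example generator, such as the fgsm_example sketch above.
from typing import Callable

import torch
import torch.nn as nn

MIN_ROBUST_ACCURACY = 0.70  # assumed release gate, not a prescribed standard

def robust_accuracy(model: nn.Module, images: torch.Tensor,
                    labels: torch.Tensor,
                    attack: Callable[..., torch.Tensor]) -> float:
    """Fraction of attacked inputs the model still classifies correctly."""
    adversarial = attack(model, images, labels)
    predictions = model(adversarial).argmax(dim=1)
    return (predictions == labels).float().mean().item()

def adversarial_release_gate(model, images, labels, attack) -> bool:
    """Return True only if the model stays accurate enough under attack."""
    score = robust_accuracy(model, images, labels, attack)
    print(f"Robust accuracy under attack: {score:.2%}")
    return score >= MIN_ROBUST_ACCURACY

# Usage sketch: adversarial_release_gate(model, test_images, test_labels, fgsm_example)
```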

Find out more…

To explore the area of Adversarial AI in more detail, I’ve recently written a White Paper called Adversarial AI: Fooling the Algorithm in the Age of Autonomy. Here, I cover different types of adversarial AI attacks, and what can be done to protect AI and developers against such attacks. Download the White Paper here.

[1] https://venturebeat.com/2020/02/24/googles-ai-detects-adversarial-attacks-against-image-classifiers/

[2] https://towardsdatascience.com/your-car-may-not-know-when-to-stop-adversarial-attacks-against-autonomous-vehicles-a16df91511f4

[3] https://www.gov.uk/government/publications/human-machine-teaming-jcn-118

[4] https://www.isaca.org/resources/news-and-trends/newsletters/atisaca/2019/volume-17/systems-thinking-in-risk-management

[5] https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00691/full

[6] https://www.gov.uk/government/publications/human-machine-teaming-jcn-118


Darminder Ghataoura

Dr. Darminder Ghataoura has over 15 years’ experience in the design and development of AI systems and services across the UK Public and Defence sectors, as well as UK and international commercial businesses. Darminder currently heads up Fujitsu's offerings and capabilities in AI and Data Science within the Defence and National Security space, acting as Technical Design Authority with responsibility for shaping proposals and developing integrated AI solutions. He also manages the strategic technical AI relationships with partners and UK government, and was awarded the Fujitsu Distinguished Engineer recognition in 2020.

Darminder holds an Engineering Doctorate (EngD) in Autonomous Military Sensor Networks for Surveillance Applications, from University College London (UCL).
