Artificial Intelligence – or AI – is the shiny new toy that everyone in the IT world wants to play with. It reminds me of when the iPad was launched on an unsuspecting world. Everybody who was somebody – or even thought they were somebody – wanted one. And so, they created a business case to justify having one.
Many of the conversations I’m hearing around AI are a bit like this at the moment. Organisations feel the need to shout about using AI, almost as though they are justifying it to themselves. They are creating problems for AI to fix without thinking through the implications – and, perhaps more importantly, without engaging other parts of their business.
It’s almost as if they have decided AI is the answer, before understanding what the question or problem is.
But AI is more than just a shiny new toy. And it’s not just individuals clamouring for it. Large corporations are buying into it in a big way. In December alone, Google, Apple and Intel – to name just three – all acquired AI specialist firms.
So what actually is AI?
I ran a simple search for “what is AI” in Google and it returned over 2,770,000,000 results. The fourth result on this search was from the BBC, and specifically the CBBC Newsround website, which defines AI as follows:
“Artificial intelligence – or AI for short – is technology that enables a computer to think or act in a more ‘human’ way. It does this by taking in information from its surroundings, and deciding its response based on what it learns or senses.”
Another definition on that first page of results included:
“…any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.”
A similar search for “Powered by AI” returned 1,120,000,000 results. Companies making the claim, just on the first page of results, range from IBM to eBay to Chooch.ai.
So, how do buyers or consumers know whether something they’re interested in really is “powered by AI”? And perhaps more fundamentally, does it really matter?
Ethics and responsibility
The question of ethics and the dangers of over-reliance on automation seem to crop up increasingly frequently in any conversation about AI. In fact, my colleague Dr. Darminder Ghataoura has recently written a blog specifically on the issue.
The creation of AI-based decision-making algorithms is a specialised science, understood by a select few. But their outputs are potentially far-reaching, affecting millions of people or costing millions of pounds. So how do we ensure that those decisions are ethically correct and non-discriminatory?
Who is accountable if things go wrong? And from a security perspective, how do we ensure that the decisions will enhance our security, without compromise?
Software developers and their employers or customers can submit code to automated security checking solutions, which flag any known bad practices or vulnerabilities within that code, whether introduced erroneously or maliciously. But to the best of my knowledge, similar solutions do not yet exist to allow recipients of an AI algorithm to verify its content.
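To illustrate the kind of automated check that does exist for conventional code, here is a minimal, purely illustrative sketch of a pattern-based source scanner. The rule set is hypothetical and far simpler than real static-analysis tools; the point is that source code can be inspected line by line against known-bad patterns, whereas a trained model’s weights offer no equivalent hook.

```python
import re

# Hypothetical rule set for illustration: regex pattern -> bad practice it flags.
RULES = {
    r"\beval\(": "use of eval() on untrusted input",
    r"password\s*=\s*['\"]": "hard-coded credential",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
}

def scan(source):
    """Return a list of (line_number, description) findings for known-bad patterns."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

snippet = 'password = "hunter2"\nresult = eval(user_input)\n'
for lineno, message in scan(snippet):
    print(f"line {lineno}: {message}")
```

A scanner like this works because source code is transparent text; the weights of a trained model are opaque numbers, which is exactly why no comparable off-the-shelf verification exists for them yet.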
The human threat to cybersecurity…
The activity of humans – either inadvertent or malicious – is, without doubt, the root cause of most data breaches, and possibly most security incidents. From easy mistakes such as sending e-mails to the wrong people and misplacing USB sticks, to badly configured S3 buckets, ‘we’ are always the common factor.
But if AI is about enabling computers to undertake actions previously performed by humans – yet we are the ones designing and building these AI-powered systems to replicate human actions, more quickly – how do we alleviate this risk of error?
And that’s just the good guys, working with the best intentions. Just imagine what the bad guys are busy designing AI solutions to do.
Serial cybercriminals have deliberately spoofed things like anti-virus download sites, delivering malicious payloads instead of protective ones, with users not becoming aware until after the event. And since AI systems learn from the data they receive, if that data is tampered with, presumably a threat protection algorithm could be taught to misbehave too?
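The data-tampering risk can be made concrete with a toy sketch (not any real product): a trivial keyword-frequency “threat detector” learns from labelled samples, and relabelling just the malicious training samples as benign teaches it to wave a malicious pattern through. All names and data here are invented for illustration.

```python
from collections import Counter

def train(samples):
    """Count how often each token appears under each label."""
    counts = {"benign": Counter(), "malicious": Counter()}
    for text, label in samples:
        counts[label].update(text.split())
    return counts

def classify(model, text):
    """Label text by which class its tokens were seen with more often."""
    tokens = text.split()
    scores = {label: sum(c[t] for t in tokens) for label, c in model.items()}
    return max(scores, key=scores.get)

clean_data = [
    ("download update installer", "benign"),
    ("read news article", "benign"),
    ("encrypt files demand ransom", "malicious"),
    ("exfiltrate credentials to server", "malicious"),
]

# An attacker who can tamper with the training feed flips the labels
# on the malicious samples -- the "poisoning".
poisoned_data = clean_data[:2] + [
    ("encrypt files demand ransom", "benign"),
    ("exfiltrate credentials to server", "benign"),
]

honest = train(clean_data)
poisoned = train(poisoned_data)

sample = "encrypt files demand ransom"
print(classify(honest, sample))    # flagged as malicious
print(classify(poisoned, sample))  # now waved through as benign
```

Real detectors are vastly more sophisticated, but the principle scales: a model is only as trustworthy as the data it was trained on.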
So, how do we ensure our AI-powered cyber protection solutions are always a step ahead of the cyber criminal’s AI-powered cyber-attacks?
If there is such a thing as malicious AI, how do we identify it before it runs? And remember AI systems are “intelligent computer systems” so may be able to differentiate between test and live environments.
It’s unequivocal that the opportunities presented by AI are far-reaching, with the potential to transform lives and possibly influence the future of humanity. But they come with some very real and significant risks and threats.
Rather than deciding AI is the answer for every single question, perhaps the right approach for organisations wanting to use AI is to look at existing business processes or problem areas, and then ask the AI experts if they can devise a solution that does it quicker, or more efficiently. This way, all areas of the business already involved in that existing process are likely to be involved in attempts to improve it.
AI is the shiny new toy. But it comes with risks – and there is currently no health warning on the label.
Find out more…
Like the technology, this is an area that is changing rapidly, and one that we are monitoring closely. Have a chat with our AI practice and let us help you create an AI strategy for your business that delivers on the benefits while managing the risks.
Since joining the Fujitsu Defence & National Security business unit, Mark has assumed responsibility for the department’s strategy and portfolio in relation to cyber along with managing strategic technical security relationships with partners and UK government.
In June 2019 Mark was awarded the status of Fujitsu