Machines learn by interpreting data about past events. In many ways, this is how human beings learn: we look for examples of what has gone before to make assumptions about what might be coming.

So when we look into the recent past for data on artificial intelligence (AI), what do we see? While the technology is able to make decisions in a range of areas, from assessing a resume to reading a medical scan, do people have confidence in it? Do they trust it? There have been many examples of AI seemingly falling short, from abusive chatbots to accidents involving self-driving cars.

AI is still an emerging technology, so are these teething problems, or is there a fundamental issue with trusting it? In this blog post, I’m going to explore the current sentiment towards AI, and investigate how businesses can overcome it to demonstrate the value of trusting the technology.

Trust and transparency

At the moment, trust is very much top of mind. We live in a world where trust – in institutions, media, technology, and even our own senses – is being shaken.

When we discussed this at our recent Executive Discussion Evening (EDE), ‘Are we running out of trust?’, William Tunstall-Pedoe made an interesting point. He argued that we struggle to feel confident about AI because it’s so difficult to understand.

Firstly, it’s unclear what actually counts as AI – partly because it’s hard to settle on a definition of ‘intelligence’. This is a challenge that William himself is intimately familiar with, as the founder of an AI business that would go on to produce Amazon’s Alexa.

The lack of clarity around AI is worsened by the black box model. Data goes in, and a decision comes out, but the system can’t explain how it reached its answer. In areas like healthcare or financial services, it is vital to be able to put decisions under scrutiny. For instance, it is now possible to use AI to detect cancer from a scan. But without understanding the machine’s thought process, it is difficult for doctors to trust the diagnosis, let alone deal with the consequences if the diagnosis is wrong.

One Danish hospital found that medical professionals disagreed with the AI in two-thirds of cases, but they couldn’t work out why.

Even if those closest to the technology have faith in it, most organisations need their decisions to be accountable, and certainly any high-impact decision needs to be explained. The EU’s GDPR even mandates this, so without explainability it is difficult to make an AI system compliant in the EU.

Participation is key to trust

At the EDE, we were also lucky enough to be joined by Margaret Heffernan, the entrepreneur, CEO and writer famous for her work on wilful blindness.

Margaret made another interesting point about the transparency of AI: it starts before the algorithm is written.

Currently, as citizens and everyday consumers, we aren’t involved in the process of AI development. Instead, we are told that AI is an inevitability, and we should accept it as part of our future.

But as Margaret emphasised, this is not language that builds trust. People should have a voice in the development and regulation of AI, since these are technologies that will shape the future of all humankind.

And there’s clear evidence that involving people in the development of AI makes them more confident and willing to trust the technology. One study found that participants were happy to rely on algorithms they knew to be imperfect, as long as they had been given an opportunity to modify them before use.

The challenge of bias

Bias is another major barrier to trust in AI.

Human actions are imperfect, and the decisions we make normally have an element of bias in them. But AI uses the data on these historic decisions to learn how to make its own judgements – so those judgements end up being biased too.

The technology will naturally reflect the asymmetries of the real world. For instance, if most CEOs are middle-aged and male, might a system conclude these were important factors to look for in a prospective candidate? What’s more, the human world is complex and illogical. Many of society’s ‘rules’, which we humans intuitively understand, would not be obvious to a machine. We know it is bad for men and women to be paid differently for the same work, yet it is OK to offer them different car insurance premiums.

Data sets need to be diverse and representative if AI is to avoid making harmful and flawed decisions. If you’re training a facial recognition system, for instance, and you don’t have a diverse enough data set, it won’t be able to recognise the full range of human faces. In this situation, it’s not the AI that we can’t trust. It’s the data.
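To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of check a team might run before training a system: comparing how well each group is represented in the training data against the population the system will actually serve. The group names, counts and threshold below are illustrative assumptions, not figures from any real project.

```python
from collections import Counter

# Hypothetical training labels for a face-recognition dataset; the group
# names, sample counts and reference shares are invented for illustration.
training_samples = (
    ["group_a"] * 7000 + ["group_b"] * 2000 + ["group_c"] * 1000
)
reference_population = {"group_a": 0.40, "group_b": 0.35, "group_c": 0.25}

counts = Counter(training_samples)
total = sum(counts.values())

for group, expected_share in reference_population.items():
    actual_share = counts.get(group, 0) / total
    # Flag any group whose share of the training data falls well below
    # its share of the population the system will be used on.
    if actual_share < 0.5 * expected_share:
        print(f"{group}: {actual_share:.0%} of training data vs "
              f"{expected_share:.0%} of population -- under-represented")
    else:
        print(f"{group}: {actual_share:.0%} of training data vs "
              f"{expected_share:.0%} of population")
```

Simple audits like this won’t catch every problem, but they make the gap between the data and the real world visible before a model ever reaches a decision.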

Our research[1] found that there is still a long way to go before people will fully trust AI. 60% would not trust AI on its own, and would want the final decision to be made by a person. And the higher the impact of the decision, the less willing people are to trust it. While 40% would not trust AI to carry out a quality inspection on a product, this rose to 54% for a medical diagnosis and 69% for a court decision.

Assembling the right kind of data will be a serious challenge for organisations using AI. Getting it wrong risks making our world less ethical, less equal – and less human.

Trust is a commitment

Trust is the theme of our 2019 Fujitsu Technology and Service Vision. But our focus on trust goes back much further: it has always been an integral part of our culture, and of our principle of human-centric innovation.

That’s why we’ve laid out our AI policy in the Fujitsu Group AI Commitment, which sets out how and why we want to use this technology:

  1. Provide value to customers and society with AI
  2. Strive for Human Centric AI
  3. Strive for a sustainable society with AI
  4. Strive for AI that respects and supports people’s decision making
  5. As corporate social responsibility, emphasize transparency and accountability for AI

Alongside our AI Commitment, Fujitsu Laboratories of Europe has become one of the founding partners of AI4People, a global forum on the social impact of AI.

This ethical approach is appropriate for any business looking to establish employee and customer trust in AI. We’re excited about developing this as an important part of all our partnerships.

Furthermore, Fujitsu is developing Explainable AI, which enables us to understand the rationale behind the patterns that AI finds. The technology enables the learning system to indicate the most significant factors driving a particular outcome. It cross-references these against a knowledge graph to rapidly identify likely explanations for the pattern.
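As a rough, generic illustration of that idea (a sketch of surfacing the most influential factors behind a single prediction, not Fujitsu’s Explainable AI itself), the snippet below ranks the inputs to a simple linear model by how much each contributed to the outcome. All feature names, weights and values are invented for the example, and the real explanation step, including the cross-referencing against a knowledge graph, is far more sophisticated.

```python
# A generic illustration (not Fujitsu's Explainable AI) of listing the
# factors that most influenced one prediction from a simple linear model.
# Feature names, weights and patient values are invented for the example.
weights = {"tumour_size_mm": 0.8, "patient_age": 0.1, "marker_level": 0.5}
patient = {"tumour_size_mm": 12.0, "patient_age": 61.0, "marker_level": 3.2}

# Contribution of each factor to the overall score for this one case.
contributions = {name: weights[name] * patient[name] for name in weights}
score = sum(contributions.values())

print(f"model score: {score:.1f}")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: contributes {value:.1f}")
```

Even this toy version shows why ranked factors matter: a clinician can see at a glance which inputs drove the score, and challenge the ones that don’t make clinical sense.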

For instance, working with the University of Tokyo, we are using the technology to find explanations for possible links between genetic mutation and disease, accelerating cancer diagnosis.

A future built on trust

So, I’ll return to my original question: do we trust AI?

As it stands, people are not fully willing to trust the technology, and in many cases for justifiable reasons. There are many challenges to overcome before this technology inspires trust, or works in a responsible way.

But while we might only rely on AI today, rather than truly trust it, tomorrow is another story.

AI has great potential to transform our world for the better. When we get it right, it will be capable of delivering fair, consistent, accurate judgements, 100% of the time.

And I trust that this future is close at hand.

[1] Fujitsu Global Transformation Survey of 900 business leaders in 9 countries, conducted February 2019.
