EY Poll: Uncertainty and lack of transparency the driving forces behind AI trust crisis

  • New mechanisms are needed to address the unique risks of AI
  • Leaders must embed risk management and monitoring mechanisms for AI


Asia-Pacific (APAC) organisations are holding back their adoption of AI due to mistrust, potential bias, and a lack of transparency and explainability, according to a poll conducted by professional services firm EY during a webinar. Seventy percent of participants identified these issues as the biggest barriers to raising trust levels in AI, particularly in Australia.

“Trust is the foundation on which organisations can build stakeholder and consumer confidence and active participation with AI systems. Across Asia-Pacific, governments and organisations acknowledge how AI technology can deliver increasing value and are experiencing life with AI, in particular facial recognition technology,” says EY Asia-Pacific Intelligent Automation leader Andy Gillard.

“With the risks and impacts of AI spanning across technical, ethical and social domains, introducing new mechanisms to address the unique risks of AI is needed, such as the development of frameworks and guidelines.”

It’s clear that most organisations realise the growing value and potential of AI. When asked about their AI journeys, 18.8% of those polled said they are exploring AI solutions that may be relevant to their industry. However, almost half of the participants (41%) are interested in exploring AI but aren’t sure where to start.

The poll also found that 52.3% believe “process automation” will be a main benefit of AI, while 18.8% see “generating new revenue potential through new products and processes” as a major benefit.

Commenting on the subject, EY Asia-Pacific Advisory Leader for AI and Analytics Gavin Seewooruttun says that APAC organisations need to view AI implementation “through a human lens rather than treat it as a strictly technological effort.”

“To do this, leaders have to embed risk management into enablers and monitoring mechanisms for AI by demonstrating their commitment to being accountable for AI systems’ predictions, decisions and behaviours,” he continues.

Gillard further adds that it is crucial for organisations to rethink their approach to addressing the attributes necessary to sustain trust in their AI solutions. Companies can look to the EY Trusted AI Framework, which lists “performance, bias, transparency, resiliency and explainability” as areas that will help maximise AI’s potential and reassure customers that these technologies can be trusted.

“In our near future, if companies employ AI wisely, consciously, and with an ethical and responsible mind, AI can make an exciting and material contribution to a better working world,” Gillard concludes.
