• As AI and machine learning gain wider adoption, the issue of trust becomes more important
  • Even big tech firms have had problems with AI going wrong
  • Certification and quality control systems are being set up and may help build trust

As artificial intelligence (AI) and machine learning (ML) move out of the lab and into real-world business applications, companies are asking: “Can we trust AI and ML for decision-making?”

It’s a question that’s important not just for large businesses. The government’s AI Activity in UK businesses study in January 2022 showed that even among small firms, 15% had adopted at least one AI technology. The government data suggests that every working day 200 small- and mid-sized firms invest in their first AI application.

AI is being used in a wide range of applications including data analysis (such as demand forecasting), processing natural language (deployed in customer-facing roles such as web chatbots), and computer vision (often used in quality control).

Albert King, Chief Data Officer for Scotland, said in March: “Being trustworthy, ethical [and] inclusive are actually necessary conditions for comfortable adoption of AI technology.”

AI has new types of failure

30% of UK firms cited legal liability as a key challenge for adopting AI (EU study, 2019).

The problem is that AI and ML are complex and can be untrustworthy. For example, in 2018 Amazon piloted a system to vet candidates for its US tech jobs by scanning CVs. After analysing 10 years of human hiring decisions, the AI taught itself to favour male candidates, marking down CVs that included phrases such as “women’s chess club captain” and CVs from candidates who had attended all-female universities. Amazon scrapped the system.

Few UK SMEs have yet handed over sensitive tasks like this to algorithms. They are typically using out-of-the-box technologies that don’t affect the firm’s operating fundamentals and offer quick ROI. Examples include smart energy control systems for their offices or production lines, or marketing automation.

However, AI is being increasingly deployed in more sensitive areas where verification is difficult. A faulty intelligent energy control system will be discovered when the next electricity bill is received; a faulty smart recruitment portal that marks down female or minority candidates could be much harder to diagnose.
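
To see why such a fault can hide in plain sight, consider the kind of simple check an SME (or its supplier) might run on a recruitment tool’s output. The sketch below is purely illustrative: the data, group labels and the 0.8 threshold (the US “four-fifths rule” used in hiring guidance) are assumptions, not a method described in this article.

```python
# Hypothetical sketch: a simple disparate-impact check on a recruitment tool's
# shortlisting decisions. Data, group labels and the 0.8 threshold (the US
# "four-fifths rule") are illustrative assumptions, not figures from this article.
from collections import defaultdict

# Each record: (applicant group, whether the tool shortlisted the candidate)
decisions = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False), ("male", True),
]

shortlisted = defaultdict(int)
total = defaultdict(int)
for group, selected in decisions:
    total[group] += 1
    shortlisted[group] += int(selected)

rates = {g: shortlisted[g] / total[g] for g in total}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "REVIEW: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: shortlist rate {rate:.0%}, ratio to best group {ratio:.2f} -> {flag}")
```

Even a rough check like this only works if someone thinks to run it and has the data to hand, which is exactly the kind of assurance an independent tester or certifier can provide.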

Many potential uses of AI by SMEs lie somewhere between these two examples. For instance, a faulty AI tool that aims to help efficient, sustainable procurement won’t destroy a company, though it could waste hours of management time and tangle the supply chain.

Computer says ‘no’

The issue of quality control is particularly acute for some types of ML, such as image recognition, that use the digital equivalent of “gut instinct” and can’t provide a human-understandable set of rules to explain their decisions.
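
Practitioners do have partial workarounds. As a loose illustration only (the model, feature names and data below are hypothetical, and this is not a technique the article itself describes), a black-box model can be probed by shuffling one input at a time and watching how much its behaviour changes, a simplified form of permutation importance:

```python
# Illustrative sketch only: probing a black-box model whose internals we cannot
# inspect. The model, feature names and data are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "black box": the auditor may only call predict(), not read the weights.
_hidden_weights = np.array([2.0, 0.0, -1.5])
def predict(X):
    return (X @ _hidden_weights > 0).astype(int)

X = rng.normal(size=(500, 3))
reference = predict(X)            # the model's own decisions on the original data

feature_names = ["years_experience", "postcode", "gender_proxy"]  # hypothetical
for j, name in enumerate(feature_names):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature/decision link
    changed = (predict(X_perm) != reference).mean()
    print(f"{name}: {changed:.0%} of decisions change when this input is shuffled")
```

A probe like this hints at which inputs matter, but it still falls short of the human-readable reasons that regulators, candidates and customers may expect.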

This is already a problem for recruitment or credit scoring. Under Article 15 of GDPR, anyone who has an important decision taken about them by an algorithm has a right to “meaningful information about the logic involved”.

Big firms employ data scientists to verify the large-scale AI applications they build or buy. That isn’t realistic for SMEs buying off-the-shelf or semi-custom AI-powered products, and the lack of independent verification potentially reduces their willingness to try this new technology.

What they need is an independent certification system, similar to the Euro NCAP programme, which extensively crash-tests new cars and then awards a safety rating of up to five stars.

And that’s what’s coming. In January 2022, the Alan Turing Institute, the UK’s national institute for data science and AI, started a project to create global technical standards for AI. Also involved in the project is BSI, the body whose famous Kitemark is used as a quality symbol on products and services ranging from construction projects to honey.

Scott Steedman, Director-General, Standards at BSI, commented: “International standards are a vital tool to help unlock the economic potential of AI, including establishing a common language for all to use.”

Building trust in machine learning

Testing and quality assurance of AI won’t be compulsory. But even a voluntary system can have an impact: Euro NCAP is voluntary too, yet cars that score five stars get a sales boost, and models that score poorly have even been withdrawn from sale.

Some jurisdictions are going further, potentially forcing firms to ensure their AI is trustworthy or risk penalties. The EU, for instance, has proposed a legal framework for AI that sets tight requirements for applications deemed high risk. This doesn’t just cover obvious categories such as medical devices and power stations, but also applications that could infringe personal rights, such as recruitment tools, systems that evaluate workplace performance, and credit scoring.

For a free credit insurance consultation call our UK team, 09:00-17:00 Mon-Fri.