Will we ever trust artificial intelligence to act on its own? Probably not, at least for the foreseeable future.
One of the mandates within the European Union’s AI Act is a call for human oversight of artificial intelligence, though it’s not clear what forms that oversight will take. Similar legislation is expected to be implemented in other parts of the world. Regulation aside, from a business perspective, human supervision may be a practical necessity.
Expect many parties to have a role in AI oversight. “Human oversight is a shared responsibility between developers and users of AI,” wrote Dr. Johann Laux, postdoctoral fellow at the University of Oxford, in an analysis of the EU AI Act. “The onus to execute oversight falls primarily on users, but developers should be involved so they can continually learn how their AI systems perform on a case-by-case basis. Norms can be adjusted as to whether oversight should be constitutive of, or corrective to, an AI’s results. So far, the new oversight rules do not fully distinguish between these types of oversight.”
Of course, human supervision is not necessarily a panacea for producing error-free AI. An analysis carried out two years ago concluded that “people are largely incapable of carrying out the supervisory functions assigned to them,” Laux also pointed out. “Humans have been shown to over-rely on algorithmic advice (automation bias), to under-rely on it (algorithm aversion), and to do poorly in judging the accuracy of algorithmic predictions.” Aviation systems, for example, are one case where human supervision is impractical.
It’s a matter of trust and confidence – both in AI systems and in human judgment. One thing is clear, observers agree: We have a lot of faith in AI for entertainment and small-scale recommendation engines, but it’s not ready to take on the biggest business tasks on a fully autonomous basis.
“For personal entertainment on a small scale, we’ve likely built up a certain level of trust,” said Ding Zhao, associate professor of mechanical engineering at Carnegie Mellon University’s College of Engineering and head of the CMU Safe AI Lab. “However, for civil applications such as self-driving, mass production, or healthcare, there are still issues that need to be addressed,” he said.
There are now many cases where AI-driven decisions or processes have been overridden or reversed by humans, Zhao added. “In the field of autonomous vehicles, this is quite common,” he said. “In healthcare, it’s also common practice for a human doctor to be able to override AI decisions.”
If anything, we are still a long way from independent AI operations, especially when it comes to data transparency, said Carm Taglienti, chief technology officer and chief data officer at Insight Enterprises. “There are a lot of unknowns when it comes to the datasets used to train many large language models,” he said. “Not knowing where data comes from erodes confidence in an AI output and can also perpetuate biases.”
Trust in AI also varies across industries, according to Yao Morin, JLL’s chief technology officer and an angel investor. “In industries such as finance and manufacturing, where traditional AI has been used for a while, people have more confidence in AI’s predictive ability.”
With other industries and applications, such as autonomous vehicles, the jury is still out, Morin cautioned.
Plus, modern AI approaches have become “less predictable and less repeatable than classic AI models,” Taglienti warned. “Moving forward, there are several emerging trends that could improve the confidence and determinism of generative AI models, such as action-based language models, causality models, and agent-based behavior.”
However, demand for higher-end AI applications will continue to grow, Zhao predicted. “The rising cost of human labor due to an aging population and the falling cost of AI will push us to use autonomous agents on a larger scale. We will be forced to solve AI safety issues within the next two decades.”
Humans must be involved in AI decisions at these higher levels, and this requires thinking through levels of responsibility. “The key is to clarify responsibility – who should be responsible for decisions: the user, or the machine and the enterprise behind the technologies?” Zhao asked. However, he sees assigning the authority or ability to override or overrule AI as “a political question rather than a scientific one.”
In the end, “it really depends on whoever is held accountable for AI’s decisions,” Taglienti said. “It’s in their best interest, whether they’re CIOs, CISOs, or IT leaders, to understand how an AI-driven decision was made and whether there were any errors. Ultimately, there are real-world implications and risks to letting AI make decisions, and it should be used deliberately with clearly defined guardrails.”
AI can help, but the final decisions must be human, Taglienti said. “After all, an AI agent today will not be able to understand many of the nuances required for these decisions. Human oversight and verification remain an essential part of AI-driven processes.”
Human intervention is essential “when AI misses the mark or falls short of expectations,” Taglienti said. “Perhaps the result was inaccurate, too general, or off topic. This is why testing and learning are so critical: they allow us to better understand the impacts and limitations of this technology and how we can use it safely and responsibly.”
The human-AI relationship “shouldn’t aim for a lights-out process,” Morin said. Instead, the key is “thinking about it in a balanced way, drawing on human reasoning and intelligence as well as the machine’s broad knowledge base and ability to summarize.”