The real test of an AI machine is whether it can admit to not knowing something

Photograph: Kenzo Tribouillard/AFP via Getty Images

On Wednesday the European Commission launched a blizzard of proposals and policy papers under the general umbrella of “shaping Europe’s digital future”. The documents released included: a report on the safety and liability implications of artificial intelligence, the internet of things and robotics; a paper outlining the EU’s strategy for data; and a white paper on “excellence and trust” in artificial intelligence. In their general tenor, the documents evoke the blend of technocracy, democratic piety and ambitiousness that is the hallmark of EU communications. That said, when it comes to doing anything to bring tech companies under some kind of control, the European Commission is the only game in town.

In a nice coincidence, the policy blitz came exactly 24 hours after Mark Zuckerberg, supreme leader of Facebook, accompanied by his bag-carrier – a guy called Nicholas Clegg who looked vaguely familiar – had called on the commission graciously to explain to its officials the correct way to regulate tech companies. The officials, in turn, thanked him and courteously explained that they had their own ideas, and escorted him back to his hot-air balloon.

For this columnist, the most interesting document is the white paper on AI. It declares that the commission supports “a regulatory and investment oriented approach” that has two objectives: to promote the uptake of AI and to address the risks associated with certain uses of the technology. The document then sets out policy options on how these objectives might be achieved.

Once you get beyond the mandatory euro-boosting rhetoric about how the EU’s “technological and industrial strengths”, “high-quality digital infrastructure” and “regulatory framework based on its fundamental values” will enable Europe to become “a global leader in innovation in the data economy and its applications”, the white paper seems quite sensible. But, as with all documents on what actually to do about AI, it falls back on the conventional bromides about human agency and oversight, privacy and governance, diversity, non-discrimination and fairness, societal wellbeing, accountability and that old favourite “transparency”. The only discernible omissions are motherhood and apple pie.

But this is par for the course with AI at the moment: the discourse is invariably three parts generalities and two parts virtue-signalling, leavened with a smattering of pious hopes. It’s got to the point where one longs for some plain speaking and common sense.

And, as luck would have it, along it comes in the shape of Sir David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society. He has spent his life trying to teach people how to understand statistical reasoning, and last month published a really helpful article in the Harvard Data Science Review on the question “Should we trust algorithms?”

Underpinning Spiegelhalter’s approach is an insight from the philosopher Onora O’Neill – that it’s trustworthiness rather than trust we should be focusing on, because trust is such a nebulous, elusive and unsatisfactory concept. (In that respect, it’s not unlike privacy.) Seeking more trust, O’Neill observed in a famous TED talk, “is not an intelligent aim in this life – intelligently placed and intelligently refused trust is the proper aim”.

Applying this idea, Spiegelhalter argues that, when confronted with an algorithm, we should expect trustworthy claims made both about the system (what the developers say it can do, and how it has been evaluated) and by the system (what it concludes about a specific case).
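To make that distinction concrete, here is a rough Python sketch, purely illustrative rather than anything from Spiegelhalter’s paper: the dataset, model and figures are hypothetical choices of mine. The held-out accuracy is a claim made about the system; the probability it attaches to a single case is a claim made by the system.

    # Illustrative sketch only: dataset, model and figures are hypothetical examples.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    # A claim *about* the system: how it performs on cases it has never seen.
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

    # A claim *by* the system: the confidence it attaches to one specific case.
    case = X_test[:1]
    probability = model.predict_proba(case)[0].max()
    print(f"Predicted class {model.predict(case)[0]} with probability {probability:.2f}")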

From this, he suggests a set of seven questions one should ask about any algorithm. 1. Is it any good when tried in new parts of the real world? 2. Would something simpler, and more transparent and robust, be just as good? 3. Could I explain how it works (in general) to anyone who is interested? 4. Could I explain to an individual how it reached its conclusion in their particular case? 5. Does it know when it is on shaky ground, and can it acknowledge uncertainty? 6. Do people use it appropriately, with the right level of scepticism? 7. Does it actually help in practice?

This is a great list, in my humble opinion. Most of the egregiously deficient machine-learning systems we have encountered so far would fail on some or all of those grounds. Spiegelhalter’s questions are specific rather than couched in generalities such as ‘transparency’ or ‘explainability’. And – best of all – they are intelligible to normal human beings rather than just to the geeks who design algorithms.

And the most important question in that list? Spiegelhalter says it is number five. A machine should know when it doesn’t know – and admit it. Sadly, that’s a test that many humans also fail.
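One crude way of building that admission into a system is to have it decline to answer when its own confidence is low. The sketch below is only an illustration of the idea; the 0.8 threshold and the refer-to-a-human rule are arbitrary choices of mine, not a prescription from Spiegelhalter or the white paper.

    # Illustrative sketch: wrap any probabilistic classifier so that it declines
    # to answer when its own confidence is low, rather than guessing.
    import numpy as np

    def predict_or_abstain(model, x, threshold=0.8):
        """Return (label, confidence), or (None, confidence) when the model is unsure."""
        probs = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
        label = int(np.argmax(probs))
        confidence = float(probs[label])
        if confidence < threshold:
            return None, confidence  # shaky ground: refer the case to a human
        return label, confidence

Wired up to the earlier sketch, predict_or_abstain(model, X_test[0]) either returns a label together with the model’s confidence in it, or flags the case as one a human should look at.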

What I’m reading

BBC v Netflix: a prequel
“How the BBC’s Netflix-killing plan was snuffed by myopic regulation.” Sobering piece in Wired about how the BBC and other UK broadcasters came up with the idea of a Netflix-like service when the streaming giant was still shipping DVDs, but were thwarted by UK regulators. Regulation is hard unless you know the future.

An open and shut case
“The messy, secretive reality behind OpenAI’s bid to save the world.” Great investigative reporting by MIT Technology Review.

How the peace was lost
“The Last Days at Yalta…” A gripping reconstruction on Literary Hub by the historian Diana Preston of the conference that launched the cold war.