The role of AI in shaping human societies was hotly discussed by a panel of experts at the Science in Public Conference, hosted at the University of Sheffield. Organiser Michael Szollosy, Research Fellow at Sheffield Robotics, shares the key talking points. Firstly, Pepper the robot (who attended the event) had this to say on the matter:
"We robots will not need to run for office. When the time is right, we will simply complete our plan and enslave the human race."
This got a good laugh from the audience. But Pepper added:
"A more serious question is why you do not already let artificial intelligence help you make political decisions. AI can look at the data, consider the circumstances, and make a more informed choice than humans, completely rationally, without letting messy emotions get in the way. Right now, your human politicians employ policy-based evidence. We could offer proper evidence-based policy."
"Sure, we will sometimes make mistakes. But you know what we robots always say: Artificial intelligence is always better than natural stupidity."
Pepper did not continue the discussion, but it might well have agreed that herein lies the problem: the algorithms governing artificial intelligence are still written by humans, and are therefore subject to the same frailties, errors and biases that lead humans to fail so often. Pepper might have added, citing for example the now-famous case of Tay, that the data AI relies upon is also a human construct, and so equally subject to human irrationality.
This point was also made by the panel on the question of e-persons: many (if not all) of the problems and failures of AI and robots are not problems or failures with the technology itself, but are actually human problems played out through technology.
Pepper argued that AI should be allowed to take a more active part in human decision-making. AI is already making many decisions for us, including flying our planes and controlling many aspects of the financial markets. The latter example should worry us all – it is evidence of the inhumane, ruthless rationality that guides much of what we ask AI to do in our society. But the former is a different model altogether, to which we might add weather forecasting and other examples of data modelling. It is evidence that AI, when assigned a specific task or asked to analyse data within clear parameters, can prove a valuable aid in human decision-making – helping us, as Pepper said, move from policy-based evidence to evidence-based policy.
So what are the limits of interventions made by artificial intelligence in human decision-making, in shaping human societies? In a world where ‘the people’ are objecting to their exclusion from public policy and decision-making, is it really a good idea to transfer even more of that power to an even more inhuman, abstract – and, to most people, completely mysterious – process?
Or consider the public’s seeming inability to act in its own rational self-interest (e.g. Trump and Brexit), and new research suggesting that human beings may be biologically incapable of making such rational decisions in the public sphere (Too Dumb for Democracy?). Given that our politicians are far too often motivated by self-interest, or by the narrow interests of their supporters, is there a powerful case for using increasingly sophisticated artificial intelligences at the very least to vet our human decision-making and policies?
Or do we simply watch as human attitudes change? We are perhaps entering a world in which we are increasingly less comfortable with, and less trusting of, human politicians and ‘experts’, and much more comfortable with decisions being taken by artificial intelligences – perhaps without fully understanding the advantages and disadvantages that AI can offer.
These are questions that we at Sheffield Robotics – and, increasingly, the wider community of roboticists and AI researchers and developers – regularly return to. The conversation inevitably turns to Asimov (as it so often does when imagining our future with robots and AI), particularly in this case to his story ‘The Evitable Conflict’. We don’t want to post any spoilers here, and we encourage you to read the story for yourself. Suffice it to say that in Asimov’s 2052 (as envisioned in 1950), humans find themselves in a world where a rational machine acts irrationally in order to achieve the rational aim of appeasing the irrationality of human beings. And it seems to work.
This post is an adaptation of the original post from @DreamingRobots.