3 - 4 OCTOBER 2018 / EXCEL LONDON

Stuart Russell: SHOULD WE FEAR SUPERSMART ROBOTS?

Wednesday 20 September 2017

robot2

Stuart Russell is a pioneer in the understanding and uses of artificial intelligence (AI), its long-term future, and its relation to humanity. He also is a leading authority on robotics and bioinformatics.

Ahead of his opening keynote at IP EXPO Europe on the 5th October at 10.00am, Stuart shares more about if we should fear supermart robots from his latest paper for Scientific American…

It is hard to escape the nagging suspicion that creating machines smarter than ourselves might be a problem. After all, if gorillas had accidentally created humans way back when, the now endangered primates probably would be wishing they had not done so. But why, specifically, is advanced artificial intelligence a problem?

Hollywood’s theory that spontaneously evil machine consciousness will drive armies of killer robots is just silly. The real problem relates to the possibility that AI may become incredibly good at achieving something other than what we really want. In 1960 legendary mathematician Norbert Wiener, who founded the field of cybernetics, put it this way: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere..., we had better be quite sure that the purpose put into the machine is the purpose which we really desire.”

A machine with a specific purpose has another property, one that we usually associate with living things: a wish to preserve its own existence. For the machine, this trait is not innate, nor is it something introduced by humans; it is a logical consequence of the simple fact that the machine cannot achieve its original purpose if it is dead. So if we send out a robot with the sole directive of fetching coffee, it will have a strong incentive to ensure success by disabling its own off switch or even exterminating anyone who might interfere with its mission. If we are not careful, then, we could face a kind of global chess match against very determined, superintelligent machines whose objectives conflict with our own, with the real world as the chessboard.

The prospect of entering into and losing such a match should concentrate the minds of computer scientists. Some researchers argue that we can seal the machines inside a kind of fire wall, using them to answer difficult questions but never allowing them to affect the real world. (Of course, this means giving up on superintelligent robots!) Unfortunately, that plan seems unlikely to work: we have yet to invent a fire wall that is secure against ordinary humans, let alone superintelligent machines.

To read the full paper on Stuart Russell’s view on super smart robots, click here - https://people.eecs.berkeley.edu/~russell/papers/sciam16-supersmart.pdf

Top