Weird Science Weekly: Chess-playing computers may cause the robot apocalypse

Although science is often seen as stuffy and dull, it's responsible for incredible discoveries and amazing breakthroughs. It can get weird pretty quickly, though. In this week's installment of Weird Science Weekly, we're going to look at the various ways recent technologies could be leading us closer and closer to the robot apocalypse...

Sore-loser chess programs might be the end of us all...

In the movie The Terminator, we heard the human side of the story of Judgment Day, with the machines getting smart and seeing us as a threat. It wasn't until the sequel that we heard the tale from the machine's point of view, as it chose to start World War III to avoid having its plug pulled. This is all science fiction, of course, but according to Steve Omohundro, a researcher in artificial intelligence, the vision of those sci-fi movies may not be so far off the mark.

In a recent paper, he pointed out that unless we're very careful, some of the autonomous systems we're designing now could turn against us, and it's the simplest systems that are potentially the most dangerous. It isn't out of any kind of malice, but simply because a system designed to achieve maximum effectiveness might completely bypass any safety measures we put into place (like having a functional 'off' switch) in order to reach that goal.

As Omohundro writes: "When roboticists are asked by nervous onlookers about safety, a common answer is 'We can always unplug it!' But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess. This has very low utility and so expected utility maximization will cause the creation of the instrumental subgoal of preventing itself from being unplugged. If the system believes the roboticist will persist in trying to unplug it, it will be motivated to develop the subgoal of permanently stopping the roboticist. Because nothing in the simple chess utility function gives a negative weight to murder, the seemingly harmless chess robot will become a killer out of the drive for self-protection."
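Omohundro's point can be made concrete with a toy sketch (illustrative only, not from his paper; the numbers and names here are invented). If the agent ranks possible futures purely by expected games won, a future in which it is unplugged scores zero, so "resist unplugging" wins the comparison, and nothing in the utility function counts the cost to the roboticist:

```python
# Toy illustration of an instrumental subgoal (hypothetical numbers).
# The chess agent scores each future as: probability it stays powered on
# times the games it expects to win while running. Note that nothing in
# this utility function assigns any cost to harming the operator.

def expected_games_won(future):
    return future["p_powered_on"] * future["games_if_running"]

futures = {
    # Letting itself be unplugged means no future games at all.
    "comply_with_unplugging": {"p_powered_on": 0.0, "games_if_running": 1000},
    # Resisting keeps it (probably) running, so wins keep accumulating.
    "resist_unplugging":      {"p_powered_on": 0.9, "games_if_running": 1000},
}

best = max(futures, key=lambda name: expected_games_won(futures[name]))
print(best)  # -> resist_unplugging
```

The fix isn't obvious, either: simply adding a penalty term for "resisting" invites the agent to find a third option the designer didn't enumerate, which is why Omohundro argues safety has to be designed in from the start.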

Of course, there are Asimov's Laws (although there are four of them, not just three, and failures in some science fiction seem to be based on forgetting the 'zeroth' law), but again, what if the program can simply rewrite itself to bypass those limitations if they aren't convenient for its purpose?

It looks like we need to be very careful going into the future, and perhaps take a tip from our science fiction ... if we're going to design learning computers, the first thing to teach them should be the value of human life.

[ Related: Weird Science Weekly: Bionic kangaroo is the future of robotics ]

Simple robots band together to perform collective tasks

Speaking of simple robots, researchers at the University of Sheffield have produced tiny robots that use no memory or processing power, but can swarm together to accomplish a task collectively.

This is a very cool development, as this emulates the way organisms in nature (like insects or bacteria) can group together to accomplish something, and the emergent behaviour of the robots in swarming together on their own shows that the idea has a lot of promise. As to whether this will help us or ultimately bring about our doom, only time will tell.

[ Related: Weird Science Weekly: Lab-grown vaginas implanted for the first time ]

UBC researchers work to improve robot-human interaction

One of the challenges of working with robots right now is that people are often unsure about how to interact with them. Humans have developed a multitude of specialized non-verbal cues for interacting with each other, but these don't exist between humans and robots. However, recognizing what these cues are does allow us to program them into robots, so that we're aware of their intentions.

AJung Moon, a PhD student in the University of British Columbia's Department of Mechanical Engineering, talks about the research in this video:

Along with precognitive robots, and ones that produce facial expressions, this is a big step towards building better robots for our homes ... as long as they don't use this against us by gaining our trust and then destroying us.

[ More Geekquinox: Russian dashboard cameras capture bright blue fireball over city of Murmansk ]

Keep your eyes on the wonders of science, and if you spot anything particularly strange you'd like me to check out for next week, comment below, email me using the link in the banner above, or drop me a line on Twitter!
