Making smart machines ethical: Montreal forum seeks to lead conversation on responsible AI

What does an ethical robot look like?

So far 2017 has been a bad PR year for machines.

A Google image-recognition tool thought a black couple was a pair of gorillas.

Facebook's news feed, which customizes what users see based on their preferences, has been accused of fostering polarized ideological communities and of helping Russia influence the U.S. election.

For all the benefits that software and automation may bring society, we've also seen cases where they cause real harm.

A conference happening Thursday and Friday in Montreal hopes to come up with ethical guidelines before things get worse.

The Forum on the Socially Responsible Development of Artificial Intelligence is gathering companies, academics and government officials to discuss what a regulatory framework for intelligent machines might look like.

Specifically, it hopes to lead conversations on three key questions: What will be the impact of AI on jobs? Who is responsible when a machine makes a bad decision? And how can we ensure machines make decisions fairly?

It will conclude with a formal declaration on responsible AI use, which attendees hope will guide the industry in the years ahead.

What is AI anyway?

Artificial intelligence may conjure up sci-fi images of superintelligent robots that plot to exterminate the pesky humans who get in their way. But experts say this is an unlikely scenario in the short term. The technology is nowhere close to this kind of self-aware sophistication.

Rather, AI is a catch-all term for computer techniques used to help humans make decisions from large, complex data sets.

One such technique is called machine learning. It trains computers to categorize information based on examples they are fed. After seeing millions of emails, for example, a computer learns to tell spam from legitimate messages. A self-driving car continuously analyses the environment around it to decide where to turn and how fast to go.
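In code, that "learning from examples" looks roughly like the sketch below: a minimal Python illustration, assuming the scikit-learn library, in which a handful of made-up emails stand in for the millions a real spam filter would see.

```python
# Minimal sketch of learning from labelled examples.
# The tiny hand-written dataset is purely illustrative,
# not how any production spam filter is actually built.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Win a free prize now",            # spam
    "Cheap meds, click here",          # spam
    "Meeting moved to 3pm",            # legitimate
    "Here are the quarterly figures",  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

# Turn each email into word counts, then learn which words signal spam.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(features, labels)

# The trained model now labels a message it has never seen before.
print(model.predict(vectorizer.transform(["Claim your free prize"])))
```

The important point is that the program is never told what spam "is"; it only generalizes from the examples it was given, which is why the quality of those examples matters so much in the discussion that follows.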

Taking bias out of the machine

Machine learning is used more and more in justice systems, for example, to predict criminal behaviour, or in finance to deny or approve loans. How do you ensure these systems don't discriminate based on gender, class or race, as ProPublica exposed in its investigation of algorithmic bias?

"Algorithms don't have bias. It's training data sets that have bias," said Abhishek Gupta, an AI ethicist and software developer at telecom equipment maker Ericsson in Montreal.

Or rather, it was biased humans who created the data and used it to train the software. In the U.S., black people are arrested at much higher rates, so a machine trained on that data will conclude that members of the black community have a higher risk of reoffending.

To train its image recognition tool, Google used a collection of one million pre-categorized images. Of the 500 images of people, only two were of a black person.
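The effect of skewed data can be shown with a toy example. The sketch below, assuming scikit-learn and entirely synthetic, hypothetical records, trains a simple model on historical labels in which one group was flagged far more often than its real risk warranted; the model dutifully reproduces that disparity.

```python
# Synthetic illustration: a model trained on historically biased labels
# reproduces the bias. All numbers here are made up for the sketch and
# do not come from any real justice or image-recognition system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Column 0: group membership (0 or 1). Column 1: a legitimate risk signal.
group = np.array([0] * 900 + [1] * 100)
signal = rng.normal(size=n)
X = np.column_stack([group, signal])

# Historical labels: group 1 was flagged far more often than the signal
# justified -- the bias lives in the data, not in the algorithm.
y = ((signal > 0) | ((group == 1) & (rng.random(n) < 0.7))).astype(int)

model = LogisticRegression().fit(X, y)

# The learned model now flags group 1 at a much higher rate,
# faithfully echoing the biased decisions it was trained on.
print("flag rate, group 0:", model.predict(X[group == 0]).mean().round(2))
print("flag rate, group 1:", model.predict(X[group == 1]).mean().round(2))
```

Nothing in the algorithm itself mentions the group; the disparity comes entirely from the training examples, which is Gupta's point.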

"Companies making these systems need show that they made the best efforts to be as inclusive as possible, and to document their why certain populations were picked [in the training data]," Gupta said.

Who's responsible for a robot's misbehaviour?

If a self-driving car decides to sacrifice a child pedestrian to save its passengers, who is accountable? The car maker? The owner? The programmer who wrote the software?

"It's hard to say who is responsible. As a casual user you have no idea how these things are built," said Peter Asaro, an assistant professor at The New School in New York and an AI philosopher.

And as algorithms become more complex, their very creators may no longer understand how they work or what they will produce.

"The accountability will be what they do about it when something bad happens," Asaro said.

Companies themselves shouldn't be left to self-regulate these ethical questions, he says. Facebook's detached response to Russia's alleged meddling in the U.S. election is a case in point, he argues, which is why he supports legislation around AI.

"But what regulations would be appropriate? In the auto industry, class-action lawsuits brought about safety changes like seatbelts. This is not as obvious in AI."

When robots take Canadian jobs, then what?

Depending on who you ask, Canada could lose 1.5 million to 7.5 million jobs due to automation in the coming years. AI is expected to hit manufacturing, agriculture, trucking, retail, and other low-skill, repetitive jobs first.

Governments and schools will need to adapt to deal with a possible wave of unemployment and develop new training programs.

It's something Sydney Swaine-Simon, a neurotechnology entrepreneur in Montreal, spends a lot of time thinking about.

"We need to think now about what can we do now to prepare people," he said.

"Say someone wants to be a radiologist. By time they are done with studies, they might start to be replaced with automation."

Swaine-Simon wants AI tools to be more accessible to non-technical people so they can see how to use them in their line of work and forecast their future job prospects.