Artificial intelligence should respect people's privacy, foster diversity of thought and be open to scrutiny. It should not be used to discriminate, to replace human relationships or to interfere in democratic processes.
These are some of the principles in the Montreal Declaration for responsible AI development, unveiled Tuesday after one year of consultations involving 100 ethicists and technology professionals.
The document, an initiative of the Université de Montréal, aims to guide companies that use AI systems in thinking through the moral implications of this increasingly powerful technology.
"Companies are deploying these systems today," said Abhishek Gupta, a software engineer at Microsoft's Montreal office and founder of the Montreal AI Ethics Institute.
"We're using them in practice with real people. It's not something in the far future that we can keep sitting and thinking about."
AI technology has been involved in some of the biggest recent public scandals, lending a sense of urgency to calls to rein it in.
The Cambridge Analytica scandal revealed the dangers of harvesting Facebook users' data without their consent for political purposes. Amazon's facial recognition software was used by U.S. police forces without rules to guide it.
China's authoritarian government is blending video footage with data from online transactions to give its citizens a "social credit score," in what seems like a dystopian trope straight out of the sci-fi TV show Black Mirror.
"It's important to have recommendations of norms and values that govern new technologies," said Christine Tappolet, a professor of philosophy at the Université de Montréal and one of the writers of the declaration.
"Systems that detect what emotions people feel, what preferences they have — people may not want these facts known about them," she added. "This needs to be respected."
Need for more digital literacy
Tappolet is especially concerned with the use of AI in automated weapons that could be programmed to identify targets and kill them.
The declaration contains a principle that "the decision to kill must always be made by human beings, and responsibility for this decision must not be transferred to an AI."
The declaration is not legally binding, but it's a starting point for future legal frameworks or regulations, said Catherine Régis, a law professor at the Université de Montréal and another of the document's co-authors. The university has formed a task force to explore legal actions to guide ethical AI research.
Ultimately, though, companies will adopt its principles only if there is enough public pressure, Tappolet said. She believes that teaching digital literacy, including how companies collect and use data, is essential to creating an informed public.
Montreal is increasingly recognized as a global hub of expertise in deep learning, an AI technique, thanks to university research institutes like MILA and successful startups like Element AI.
In the last two years, tech giants like Google, Facebook and Microsoft have opened and expanded research labs in the city.
Just hours after the declaration was unveiled, three AI companies from the UK — QuantumBlack, WinningMinds and BIOS — announced they were opening offices in Montreal.