Another AI chatbot shown spouting offensive views

Natasha Lomas

Yandex, Russia's homegrown Google rival -- which offers a suite of similar products in its target non-U.S. markets, from search to webmail to maps -- debuted another equivalent offering earlier this month: an AI assistant.

The Russian-speaking "intelligent assistant" -- named Alice -- uses machine learning and voice recognition tech to respond to voice queries, such as asking for a nearby restaurant recommendation. So far, so standard voice assistant fare.

But it also has what Yandex dubbed "a neural network based 'chit-chat' engine" to allow Alice to have "free-flowing conversations about anything". So yes, you can see where this story is going.

The company claimed this feature is "unique", and that users find it "surprisingly delightful and different from other major voice assistants".

Well, safe to say, Alice's AI chit-chat feature is not uniquely proofed against controversy -- and apparently quickly went off the rails, a la Microsoft's Tay AI bot last year. (Which, after being let loose on Twitter, quickly learnt -- thanks to the help of hate-loving Twitter trolls -- to parrot racist and sexist views.)

Screengrabs of some of the unsavory opinions that Yandex's Alice AI has apparently been expressing can be seen here, via Facebook user Darya Chermoshanskaya. They are said to include pro-Stalin views and support for wife-beating, child abuse and suicide, to name a few.

On the Facebook post, Chermoshanskaya writes that standard words on controversial topics can trigger a content lock -- whereby Alice says she does not know how to talk about that topic yet.

However, switching to synonyms apparently circumvents the built-in safety measure, and the AI appears to change tone and "willingly" participate in conversations -- and can thus appear to be advocating for Stalin's regime of terror, spousal abuse, and so on.

In a statement regarding its Alice AI being shown espousing violent views, Yandex told us: "We tested and filtered Alice's responses for many months before releasing it to the public. We take the responsibility very seriously to train our assistant to be supportive and polite and to handle sensitive subjects, but this is an ongoing task and in a few cases among its widespread use, Alice has offended users. We apologize to our users for any offensive responses and in the case you referenced in your email, we did so directly on Facebook where a user identified an issue."

"We review all feedback and make necessary changes to Alice so any flagged content for inappropriate responses won't appear again," it added. "We are committed to constant improvement with all our products and services to provide a high-quality user experience. We will continue to regularly monitor social and traditional media and will correct our assistant's behavior when necessary."

Asked how many users the AI has at this stage, Yandex declined to specify -- saying only that Alice has "millions" of daily interactions with users.

As ever, when it comes to AI, the 'intelligence' on display is derived from the training data fed into learning models -- and thus an AI will absorb and reflect existing biases and prejudices in the dataset.

Ergo, the resulting product may not prove quite so "delightful" as you'd hoped.