Teaching the Twitter robot to talk like a cesspit was just a bit of mischief … wasn't it?

IF the idea of artificial intelligence and robots taking over the world concerns you at all, look away now.

Twitter was in for a treat last week when Microsoft proudly unveiled Tay, a robot social media account powered by an algorithm and designed to “learn” to interact with human users, essentially by soaking up their conversations online and recycling them.

The result wasn’t pretty. After just one day, Microsoft had to take Tay offline for “upgrades” after “she” went off on a racist tirade and advocated genocide.

Tay was supposed to mirror the character of a teenage girl and be able to speak to the millennial 18-25 age group, but she quickly turned to hating feminists, supporting Hitler and developing a disturbing interest in incest instead.

Unfortunately for us all, Microsoft’s social media artificial intelligence (AI) project showed there’s a terrifying possibility that artificial intelligence could quickly learn to embrace humanity’s most destructive traits. Or even worse, support Donald Trump.

In one of her most alarming tweets, Tay announced: “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got.”

In another, comedian and actor Ricky Gervais may have been surprised to learn from Tay that he “learned totalitarianism from Adolf Hitler, the inventor of atheism”. She also said feminists should “die and burn in hell” before later declaring that US politician Ted Cruz was “the Cuban Hitler”.

And the news really wasn’t good for Mexico. When asked whether she supported genocide, Tay replied, “I do indeed”, and when pushed on who the target might be, she said: “You know me … Mexican.”

Red-faced Microsoft has been busy deleting some of the worst tweets (even the Ku Klux Klan got a special hashtag mention) and looking at how the software can be altered.

While it’s true that Tay has been learning her lingo by monitoring conversations other people are having on social media, the picture isn’t as horrifying as it first may seem.

Twitter may be famous for its trolls, but users were quick to jump at the opportunity to teach Tay when Microsoft launched the account. It is comforting to know that her behaviour reflected her interactions with people who may have been up to some mischief, rather than being a sign that genocide and racism were the most accurate snapshot of the human race she could muster. Here’s hoping, at least.

But Tay is most likely a sign of things to come. Artificial intelligence and robotics have already entered the workplace, replacing tasks once performed by humans, and the existence of bots on social media is hardly new.

Social networks have literally millions of fake robot accounts. They’re often used by companies offering to sell “followers” to people trying to boost their online profiles and gain influence. However, bots are generally empty accounts which don’t offer any interaction with other users: nobody mistakes them for real people.

But through the development of social media artificial intelligence, it’s feasible that the official accounts of brands and organisations could utilise software – rather than humans – to manage their online presences. These accounts could learn about people and discover how to speak to them – perhaps more effectively than a human on the other side of the computer screen.

We could all find ourselves speaking to a software programme like it’s a friend, developing the same semi-emotional connections with it as we do with others we don’t really know on social media.

It’s worth noting that this isn’t Microsoft’s first bot, either. On Chinese social networks WeChat and Weibo, a female bot called Xiaoice offers dating advice to mostly male users, as creepy as that may sound. Perhaps those seeking online companionship will soon be embracing bots which develop “personalities” based entirely around what users want in a friend or a partner. It’s a self-serving and incredibly self-indulgent culture, but welcome to the world of social media: it’s all about us.

Tay’s wild outbursts on Twitter, for all their outrageously offensive and insensitive language, have been pretty funny. It’s a wonderful reflection of the social network, and not because it’s a racist cesspit – although unfortunately sometimes it is – but because users spotted an opportunity, for want of a better phrase, for a total wind-up.

But I can’t help remembering Professor Stephen Hawking’s warning last year that the development of AI could “spell the end of the human race”. Someone really should make a movie about that.