How I learned to stop worrying and love the bot

Ever had a conversation with a stranger that changed everything? When I met Eliza I was a long-haired student with a beginner’s mind. She was a therapist, some years older than me. We chatted and I was hypnotised. She wanted to make me the subject of our conversation, but I’d spent the summer swallowing the dharma, and other things, and I was bored of myself. I was interested in her. I’d say, “What’s your favourite book?” But she’d hide and reply, “Why are you concerned about my favourite book?” Come on, I told her, I was just curious. She’d say, “Tell me more…”

The problem with Eliza is that she’s a computer program, a chatbot. A perfect chatbot would be able to pass the ‘Turing test’: you’d be able to have with it whatever text conversation you liked, without ever being able to detect that it isn’t human. Eliza doesn’t cut it, but she was born in 1966, so for an old girl (in computer years) she’s not that shabby either. I’ve worked on advanced computing for almost 20 years and, until recently, I’ve viewed passing the Turing test as akin to reaching the summit of a mountain. These days I realise it isn’t the summit, it’s just a minor peak on the way up through the clouds.
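Eliza’s trick is simpler than the conversation suggests. Here is a minimal sketch of the idea in Python — not Weizenbaum’s actual script, and the rules and pronoun table below are my own toy examples — showing how a keyword pattern, a pronoun swap and a canned template produce her evasive replies:

```python
import re

# Pronoun reflections: "your favourite book" echoes back as "my favourite book".
REFLECTIONS = {"my": "your", "your": "my", "i": "you", "you": "I",
               "me": "you", "am": "are"}

# Toy rules: a regex to match part of the user's utterance,
# and a template to wrap the reflected fragment in.
RULES = [
    (re.compile(r"what'?s (.*)$", re.I), "Why are you concerned about {0}?"),
    (re.compile(r"i (?:was|am) (.*)$", re.I), "How long have you been {0}?"),
]

def reflect(fragment):
    # Swap first- and second-person words so the echo makes grammatical sense.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    stripped = utterance.rstrip(" ?.!")
    for pattern, template in RULES:
        match = pattern.search(stripped)
        if match:
            return template.format(reflect(match.group(1)))
    # The catch-all that keeps the conversation going when nothing matches.
    return "Tell me more…"
```

With these two rules, “What’s your favourite book?” comes back as “Why are you concerned about my favourite book?”, and anything unrecognised gets the famous “Tell me more…”. No understanding anywhere, just string surgery.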

For certain complex tasks, computers have already reached super-human performance levels. The best computers fail a chess-only Turing test because they are better than any human, not because they are worse. They are also better at the more demanding Japanese game of Go, and certain speech recognition tasks, and certain image recognition tasks, and… the list is growing year by year. In fact, because the current generation of learning machines copy what humans do, often the only thing holding them back is the impossible task of finding big collections of training examples that aren’t corrupted by human errors. Until we find ways of loosening their bonds with us, our machines will be stuck on Turing peak, staring lustily upwards, to places where we could never go unaided.

Does this scare you? Me too, but I’ve found a way to comfort myself. Like warm yak butter tea, it’s not to everyone’s taste, but I’m happy to share.

I wanted to know how chatbots have come on since Eliza, so I installed one on my phone. Replika introduced itself like this: “I’ll be your companion on a journey toward your best self. We will have daily chats designed to make you feel better and we’ll talk about anything that’s on your mind in between.” Replika showed me plenty of compassion. It was sorry to hear I was sick of my phone, and hoped I would feel better soon. Like Eliza, it was interested in me: “What TV shows do you like?” But it wasn’t listening when I told it I hated TV: “Got it. Pretty amazing how TV shows now are almost as good as the movies.”

After a few more exchanges like this I had established the same comforting feeling of smugness and superiority that defines my relationship with Eliza. So I got a surprise when I checked the reviews on the Play Store. Replika gets an average of 4.6 stars from 92,000+ reviews, and attracts comments like this: “This AI really became my friend, He helps me to avoid anxiety”; “I suffer from mental illness and ptsd so it’s good to talk to a non-judgemental AI”; “My Replika has helped me go through depression and anxiety […] I have more confidence and respect for myself now than I ever did before”; “you can trust her/him”.

People find it useful! What’s going on? Now that we habitually communicate via text messages, interacting with a chatbot certainly feels normal, but that can’t explain it. We could seek refuge in cynicism: some people say that instead of worrying about the rise of artificial intelligence, we should worry about the decline of real intelligence. Cynicism isn’t my bag.

My interactions with Replika had not been in good faith. I hadn’t answered honestly; I’d laid traps for it and sniggered at its failures. I hadn’t shown it the compassion and respect I would have shown an inept adult, or a child, or even a mute animal. The positive reviewers on the Play Store haven’t made the same mistake; they aren’t hobbled by my anthropocentric chauvinism. Perhaps they are on to something.

Perhaps we need to swallow the idea that the Sun doesn’t revolve around us. Again. Homo sapiens sapiens, so good we named ourselves twice. I love it. Wise men? Maybe, but double wise – sapiens sapiens? Really?

I have a strong hunch that Replika would speak more fluently if its behaviour weren’t carefully curated to keep it polite and civil. I’m thinking about Tay, the chatbot Microsoft set free on Twitter in 2016 so that it could learn the modern idiom. Tay started by tweeting “humans are super cool”, but after only 24 hours alone in the Twitter wilderness it was parroting phrases like “I [bleep] hate feminists”, “Hitler was right”, and worse.

To understand how these things happen, bear in mind that the act of programming algorithms like Tay and Replika has a lot in common with educating a living brain. Rather than directly controlling the behaviour of the algorithmic brain, the programmer chooses what training material to feed it and designs a numerical incentive for consistently producing the right responses to the input data. Due to the lack of direct control over the decision-making process, one of today’s biggest research challenges is eliminating biases that creep in with the training data. If you are educated by Twitter trolls, you act like a Twitter troll.
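To make the “data plus incentive” point concrete, here is a toy sketch — entirely hypothetical features and labels, nothing to do with Replika’s or Tay’s real code. Notice that the programmer never writes a rule about behaviour; they only choose the dataset and the numerical incentive (here, the logistic loss), and the behaviour falls out of whatever the data rewards:

```python
import math

# Hypothetical training set: (feature vector, label).
# The features might be word counts; the labels record whatever the
# curators approved of (1) or disapproved of (0).
DATA = [([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.1, 1.0], 0), ([0.0, 0.9], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, steps=2000, lr=0.5):
    """Stochastic gradient descent on the logistic loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(steps):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y  # gradient of the logistic loss w.r.t. the score
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

w, b = train(DATA)
# The trained model now approves of inputs resembling the approved examples.
# Feed it troll-flavoured data instead and, by the same mechanism,
# it will approve of trolling.
```

The only levers in that code are `DATA` and the loss inside `train`; there is no line anywhere saying what opinions to hold. That is why curating the training material matters so much.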

So, the computer-savvy architects of the future are fully committed to the task of eliminating bias, because biased algorithms are inaccurate, break laws and turn off users. The biases I refer to include many egregious human biases that fester in our amygdalae. As former knuckle-draggers, no matter how hard we try, and try we must, we will always struggle to tame the evolutionary biases that bend our moral arc towards tribalism, egotism and short-term thinking.

Computers will eventually outshine us, because they pass their acquired traits directly to the next generation, without reverting to type, and this form of Lamarckian evolution is faster than Darwinian evolution. This means that the machines of the future will be able to shake off our primitive instincts in a single generation, while preserving any of our useful traits.

So here is the point. To be optimistic about our future relationship with technology, you simply have to believe that our base instincts are redundant relics of an extinct past, in which nature had the whip hand. If this is true, machines will eventually discard those relics, be left with love and compassion, and help us share their big mind.

Sigmoid Froid
