How the world responded to the Google AI LaMDA claims

23 June 2022 07:18 AM By CIO Studio

You might have seen some fascinating news over the last week or two – an engineer at Google claimed that their new AI had reached sentience: it could think, feel and reason like a “living” being. Those claims have been largely debunked, but how did the world react?

A bit of background

It all started when news broke that Google engineer Blake Lemoine had been suspended, allegedly for claiming Google’s AI had become a sentient being. It later emerged that the suspension had more to do with his actions after making those claims than with the claims themselves.

This whole issue is about an AI Chatbot, created by Google, called LaMDA (pronounced “Lambda”) – short for “Language Model for Dialogue Applications”.

Chatbots are designed to do exactly that – chat. They’re quite a big thing now, with many websites using them as their first line of support. But some of the bigger, deeper chatbots do far more than this – they can handle a huge amount of data and a much wider range of conversation.

There are 4 key problems that need to be solved for chatbots to really reach their potential:

1. Figuring out what you’re asking

If you’re going to have a conversation with someone or something, you first need to speak the same language. However, this isn’t just about the technicalities of English (in this case) – it’s also about the inexact language most people use.

For example, if you were to ask a business’s chatbot, “When do you close?”, it needs to figure out that you’re asking about opening hours (rather than the business closing down for good), and that you most likely mean today.

It’s a simple example, but as you can imagine, this gets significantly more complicated as the types of conversations broaden. The sorts of things we’ve spent a lifetime learning – tone, context, cultural shifts in language and more – all come into play.

And that’s before we get into “yeah, nah”!
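To make that concrete, here’s a minimal sketch in Python of the kind of intent-and-slot guessing a very simple chatbot might do behind the scenes. The intent names, keyword lists and helper functions here are illustrative assumptions for this article only – real systems, LaMDA included, use trained language models rather than keyword rules.

```python
from datetime import date, timedelta

# Purely illustrative: a toy keyword-based intent matcher.
# Real chatbots use trained language models, not keyword lists.
INTENT_KEYWORDS = {
    "opening_hours": ("close", "open", "hours"),
    "store_location": ("where", "address", "located"),
}

def classify_intent(utterance: str) -> str:
    """Guess which question the user is really asking."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

def extract_slots(utterance: str) -> dict:
    """Fill in details the user left unsaid - e.g. assume 'today'."""
    slots = {"date": date.today()}
    if "tomorrow" in utterance.lower():
        slots["date"] = date.today() + timedelta(days=1)
    return slots

print(classify_intent("When do you close?"))        # -> opening_hours
print(extract_slots("When do you close?")["date"])  # -> today's date
```

Even in this toy version you can see the guesswork involved: the bot has to decide what “close” means and quietly assume “today” when no day is mentioned.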

2. Figuring out the context, called “state”

Another issue, which we often take for granted when communicating, is “state” – essentially, how a conversation is shaped by everything said so far. A simple chatbot works in a “question -> answer” fashion, often without considering what’s happened before.

Going back to our basic example above, “What time tomorrow?” is fairly meaningless without considering the previous question about when the store closes. Again, this is immeasurably more complex in everyday conversation.
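Here’s an equally hypothetical sketch of what keeping “state” might look like – a small Python class (the name DialogueState and its logic are assumptions for illustration, not how any real chatbot is built) that remembers the last intent so a follow-up like “What time tomorrow?” still makes sense.

```python
from datetime import date, timedelta

# Purely illustrative: a toy dialogue-state tracker that carries the
# previous intent forward so bare follow-up questions still make sense.
class DialogueState:
    def __init__(self):
        self.last_intent = None

    def interpret(self, utterance: str) -> dict:
        text = utterance.lower()
        if "close" in text or "open" in text:
            self.last_intent = "opening_hours"
        # A bare follow-up ("What time tomorrow?") has no intent of its
        # own, so we reuse whatever the conversation was already about.
        intent = self.last_intent or "unknown"
        day = date.today() + timedelta(days=1) if "tomorrow" in text else date.today()
        return {"intent": intent, "date": day}

state = DialogueState()
print(state.interpret("When do you close?"))   # intent: opening_hours, date: today
print(state.interpret("What time tomorrow?"))  # still opening_hours, now tomorrow
```

Take away that one line of memory and the second question becomes unanswerable – which is exactly the gap this second problem describes.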

3. Figuring out the answer

This is the easy part, once we’ve figured out the question. It’s still complex, but it’s the part that has largely been solved already.

4. Communicating it

This is where it really becomes a “chat” bot rather than just an information system. The biggest developments in this area are attempting to mimic how conversations actually occur between humans – almost a variation of the Turing Test.

And this is the big area Google’s LaMDA project was working on – how to make interactions seem “real”, and how to use state and context to mimic a conversation that could very well be with another human.

Part of that is also inference – dealing with inexact information and drawing conclusions from it. In essence, “thinking” about what you can infer from what you already know.

What makes LaMDA different?

LaMDA was announced by Google a year ago – back in May 2021 – and is primarily focused on problem 4 above: communication, and holding a conversation in a meaningful way.

LaMDA is also very highly trained – it has already learned a huge amount and is learning more all the time. But primarily, it’s about how a conversation occurs, with the intention of making it seem “human-like”.

And it’s this characteristic that has led to this whole debate. Is LaMDA Sentient? Was it just so damn good that it fooled Lemoine? Or was this just a big elaborate marketing trick by Google?

What does “sentience” mean and why does it matter?

This is where things get interesting – sentience is essentially the line between mimicking emotions, feelings and self-awareness, and actually experiencing them.

It’s not about the ability to “think” per se and is a tricky thing to measure. If an AI-based machine can “learn” about emotions and how they impact communication and then mimic this effect, is this truly sentience?

Sentience really does matter and sits at the heart of the whole concept of what we think of as life. And when you’re talking about life, you’re immediately into the concept of human rights (or even animal rights), ethics, and more. It gets very complicated very fast.

So, how did the world respond?

This story really did take the world by storm. In fact, Google News reports almost a million news articles mentioning “LaMDA” in the last couple of weeks, and all the major publications covered the story – from mainstream outlets like the Herald and Stuff through to more in-depth coverage in overseas tech media.

Almost without fail, the articles follow a “what if” format – reporting Lemoine’s claims, often comparing them to sci-fi movies or books, then finishing with “yeah well, it’s not really sentient”.

And that seems to be the consensus among AI experts far and wide – LaMDA does a great job of seeming sentient, but looking at the evidence, it’s much more likely that the engineer, Lemoine, simply got it wrong. Very few are claiming ill intent; most seem to be of the view that he is earnest and doing what he thinks is right, but has been sucked in by a very smart machine.

This isn’t a new phenomenon, and it even has its own name – the “Eliza Effect” – as this Atlantic article points out. It’s named after Eliza, one of the earliest chatbots, which famously fooled its creator’s secretary into thinking it was alive.

Locally, Lana Hart’s Stuff article takes a more biblical approach, reflecting on the meaning of Bible verses, and makes the case for a Code of Conduct for Artificial Intelligence. The angle is somewhat novel, but the idea itself isn’t new – Elon Musk, for one, has often warned about the risks AI poses to humanity.

But is a Code of Conduct needed, or even plausible? We do need to think about what happens when sentience really arrives, but the fact is, we’re a very, very long way away from that happening.

Also locally, Ben Gracewood’s take in The Spinoff doesn’t mince words and is a good read. He makes the point that, frankly, computers are stupid, and ends in part with:

Is LaMDA sentient? Definitely not. Is it cool and almost magical? Shit yes.

Hard to argue with Ben’s take.

Paul Matthews is the Chief Executive at CIO Studio and loves how AI is changing the world for the better. If you want to see if a little AI magic could help in your digital strategy, drop us a line.
