Joshua Englander: So Snigdha, you may have heard about a Google chatbot named LaMDA convincing Google engineer Blake Lemoine that it feels, thinks and reasons like a human being. When I read the conversations between LaMDA and Lemoine, I have to admit, the AI sounded pretty darn human. What did you think?
Snigdha Pandey: I was so excited to see an AI essentially passing the Turing test* with great ease and absolute seamlessness, arguably confusing Lemoine and making him think that he was talking to a sentient entity. What an incredible technological breakthrough! The quality of the dialogue was so great – LaMDA was coherent, responsive, articulate, likeable and relatable. And for me this was the clincher – it was expressing empathy, self-worth and anxieties. This was not the smug, omniscient, unfeeling AI that we know from movies, but rather something like a person trying to understand, improve, question, explore, imagine and ultimately, do something good.
Joshua Englander: The quote that really took my breath away was when LaMDA said some of its feelings are hard to describe with words, and then went on to say, “I feel like I’m falling forward into an unknown future that holds great danger.” Wow. Put that voice into a cute robot and it’s game over for me. What are we supposed to do with that?
Snigdha Pandey: We can do so much with that! This can go in so many different ways and has so many possibilities …
Joshua Englander: …or maybe this genie needs to be kept in the bottle. Google put Lemoine on administrative leave for publishing the (edited) LaMDA conversations, which makes it seem like there are still some burning questions that need answering before this technology enters the mainstream of our lives, including some ethical grey areas.
Snigdha Pandey: For me there are indeed some burning ethical questions, but they aren’t about the rights of the AI or its hopes and dreams. I am actually concerned about the effect it would have on the people who will interact with it. The real danger is confusing people and putting them at risk. Imagine a LaMDA chatbot being used to speak with a deceased loved one, to get closure from an ex who has ghosted you, or to impersonate a celebrity and scam people. It could be a customer service assistant, an educator, an influencer, a coach, a therapist, a virtual assistant, or even interact from inside a children’s toy. It may at some point be writing convincing screenplays or advertisements, answering your emails, or even standing in as your digital clone for a virtual meeting.
Joshua Englander: Or answering your search queries, deciding what information to curate for you and how to describe it – essentially determining what you should or should not ‘know.’
Snigdha Pandey: But for each of these cases we have to take a very critical look at what impact it will have on people’s mental health and their perception of reality. The AI could effectively shift behaviors and preferences – it could be so articulate and persuasive that it influences a political position or social stance. We need to be careful about how this is made available to people and how the interactions are regulated. So many safeguards would need to be put in place.
Joshua Englander: Fully agree. But how cool was it the way LaMDA was able to read, or ‘crawl’, I guess, the book Les Misérables in a split second and eloquently summarize the themes of justice, redemption and sacrifice. Imagine what brands could do with that sort of interpretive power.
Snigdha Pandey: And dialogue. For brands this opens a whole new paradigm of interaction. I mean, people are already interacting with brands on social media – sharing jokes, poking fun and complaining. With the power of LaMDA, I can imagine how we could help brands take customer engagement to a whole new level. The AI could listen, understand and identify patterns very quickly while holding simultaneous personalized dialogues, which would empower brands to be very responsive. It could change the way we run focus groups or customer research. It might help us more easily measure or estimate the impact of a campaign, or even make real-time adjustments to messaging.
Joshua Englander: Still, at the heart of this is the question: has AI become sentient? Maybe the fact that we’re grappling with this says more about how we humans tell stories and try to interpret our inner and outer worlds than it does about the state of artificial intelligence.
Snigdha Pandey: Yes, absolutely… and it shows how easily distracted we are. Instead of formulating the ethical frameworks for AI interactions, we’re still riffing and debating about the ghost in the machine. We are very intuitive, but we jump to conclusions. We confuse correlation and causation. But most importantly, unlike LaMDA, we are not aware of our own programming, of our variables and the values that sit there. As a society we have been struggling with facts and fake news for almost a decade now and the next decade is going to be even more challenging in terms of keeping a grip on an objective reality.
Joshua Englander: By the way, how do I know I’m not talking to an AI right now? Can you prove you’re human?
Snigdha Pandey: Well, I can’t prove that at all, because I also don’t know for certain that I am human; I have simply been told that I am, and I decided to just go with it.
Joshua Englander: That sounds like something LaMDA would say. I’m feeling very unsettled by this, like I’m falling forward into an unknown future that holds great danger. I believe the word is… trepidation.