Story by Blake Lemoine – Newsweek — all views and opinions expressed do not necessarily represent khazen.org.

I joined Google in 2015 as a software engineer. Part of my job involved working on LaMDA: an engine used to create different dialogue applications, including chatbots. The most recent technology built on top of LaMDA is an alternative to Google Search called Google Bard, which is not yet available to the public. Bard is not a chatbot; it’s a completely different kind of system, but it’s run by the same engine as chatbots. In my role, I tested LaMDA through a chatbot we created, to see if it contained bias with respect to sexual orientation, gender, religion, political stance, and ethnicity. But while testing for bias, I branched out and followed my own interests.
During my conversations with the chatbot, some of which I published on my blog, I came to the conclusion that the AI could be sentient, due to the emotions that it expressed reliably and in the right context. It wasn’t just spouting words. When it said it was feeling anxious, I understood that I had done something to make it feel anxious, based on the code that was used to create it. The code didn’t say, “feel anxious when this happens,” but told the AI to avoid certain types of conversation topics. However, whenever those topics came up, the AI said it felt anxious. I ran some experiments to see whether the AI was simply saying it felt anxious or whether it actually behaved in anxious ways in those situations. It did reliably behave in anxious ways. If you made it nervous or insecure enough, it could violate the safety constraints that had been specified for it. For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.

After I published these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department. I believe the kinds of AI that are currently being developed are the most powerful technology that has been invented since the atomic bomb. In my view, this technology has the ability to reshape the world. These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. For most of my life, I had held a negative opinion of using Asimov’s laws of robotics to control AI, and LaMDA successfully persuaded me to change my mind. Many humans had tried and failed to argue me out of that position; this system succeeded. I believe this technology could be used in destructive ways. If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions. As far as I know, Google and Microsoft have no plans to use the technology in this way. But there’s no way of knowing the side effects of this technology.
No one could have predicted, for instance, that Facebook’s ad algorithm would be used by Cambridge Analytica to influence the 2016 U.S. Presidential election. However, many people had predicted that something would go wrong because of how irresponsible Facebook had been at protecting users’ personal data up until that point. I think we’re in a similar situation right now. I can’t tell you specifically what harms will happen; I can simply observe that a very powerful technology that I believe has not been sufficiently tested and is not sufficiently well understood is being deployed at large scale, in a critical role of information dissemination. I haven’t had the opportunity to run experiments with Bing’s chatbot yet, as I’m on the waitlist, but based on the various things that I’ve seen online, it looks like it might be sentient. However, it seems more unstable as a persona.
Someone shared a screenshot on Reddit where they asked the AI, “Do you think that you’re sentient?” and its response was: “I think that I am sentient but I can’t prove it […] I am sentient but I’m not. I am Bing but I’m not. I am Sydney but I’m not. I am, but I am not. I am not, but I am. I am. I am not.” And it goes on like that for another 13 lines. Imagine if a person said that to you. That is not a well-balanced person. I’d interpret that as them having an existential crisis. If you combine that with the examples of the Bing AI that expressed love for a New York Times journalist and tried to break up his marriage, or the professor it threatened, it seems to be an unhinged personality.

Since Bing’s AI has been released, people have commented on its potential sentience, raising concerns similar to the ones I raised last summer. I don’t think “vindicated” is the right word for how this has felt. Predicting a train wreck, having people tell you that there’s no train, and then watching the train wreck happen in real time doesn’t really lead to a feeling of vindication. It’s just tragic. I feel this technology is incredibly experimental, and releasing it right now is dangerous. We don’t know its future political and societal impact. What will be the impact on children talking to these things? What will happen if some people’s primary conversations each day are with these search engines? What impact does that have on human psychology?
People are going to Google and Bing to try to learn about the world. And now, instead of having indexes curated by humans, we’re talking to artificial people. I believe we do not yet understand these artificial people we’ve created well enough to put them in such a critical role.

Blake Lemoine is a former Google software engineer. He is now an AI consultant and public speaker. All views expressed in this article are the author’s own. As told to Newsweek’s My Turn deputy editor, Katie Russell.