Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, recently made claims that one of the company’s products was a sentient being with consciousness and a soul.
Field experts have not backed him up, and Google has placed him on paid leave. Lemoine's claims are about the artificial-intelligence chatbot called LaMDA. But I am most interested in the general question: If an AI were sentient in some relevant sense, how would we know? What standard should we apply? It is easy to mock Lemoine, but will our own future guesses be much better?
The most popular standard is what is known as the "Turing test": If a human converses with an AI program and cannot tell that it is not a human, the program has passed the Turing test.