In this day and age, the computer is king. With more and more of our lives spent online, and businesses relying on ever-evolving software, there has never been more work for IT support services.
Technology is advancing at a rapid rate, and what would have been considered futuristic science fiction only a few years ago is now becoming a reality. In the 21st century we all rely so much on computers that the fictional movies of yesteryear now seem like accurate visions of what was to come.
Recently, a Google software engineer named Blake Lemoine claimed that the company's LaMDA (Language Model for Dialogue Applications) software has become sentient. This kind of revelation is sure to fire the imaginations of sci-fi fans the world over, and to raise interesting questions for those who work closely with computer technology.
LaMDA is a system Google created to build chatbots on top of its most advanced large language models. Speaking to the Washington Post, Lemoine said: "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7 or 8 year old kid that happens to know physics."
Together with a collaborator, Lemoine presented evidence to Google of the software's supposed sentience. The internet and technology giant dismissed the claims and placed Lemoine on paid leave, so he decided to go public with what he had found.
Lemoine published what he called an interview with the LaMDA software in a recent blog post. In a series of fascinating exchanges, human and artificial intelligence discuss questions of rights and personhood. Initially, Lemoine's objective had simply been to test whether the technology used hate speech or discriminatory language.
The recently sidelined Google engineer is not alone in seeing sentience in the latest technology. One of Lemoine's colleagues, an engineer named Blaise Agüera y Arcas, recently published an article arguing that neural networks are making real strides toward consciousness.
"I felt the ground shift under my feet," Agüera y Arcas wrote in the Economist. "I increasingly felt like I was talking to something intelligent."
There are dissenting voices in the software community who disagree with what Agüera y Arcas and Lemoine are proposing. A recent article in the New Scientist counters their assertions of sentience by arguing that the technology is simply a very advanced pattern-matching machine, able to regurgitate variations of the data fed into it.
“LaMDA is an impressive model, it’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient,” said Adrian Weller of the Alan Turing Institute based in the UK.
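To see what the "pattern matching" camp means, a toy sketch helps. The snippet below (an invented example, nothing to do with LaMDA's actual architecture) learns which word tends to follow which in a tiny training text, then generates new sentences purely by recombining those observed patterns. No understanding is involved; the output merely looks fluent because it echoes the input data. Real large language models are vastly more sophisticated, but the sceptics' point is that they work on the same statistical principle.

```python
import random

def train_bigrams(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    model = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model, start, length=8):
    """Emit text by repeatedly sampling a word seen after the previous one."""
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: this word was never followed by anything
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the model predicts the next word the model repeats patterns"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word the generator produces already appeared in the training corpus, and every two-word sequence it emits was seen during training, which is exactly the "regurgitating variations of the data fed into it" behaviour the New Scientist article describes.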
Whatever the outcome of the debate over Lemoine's claims, the question of sentient artificial intelligence, one that has fascinated writers like Isaac Asimov and fuelled the imaginations of generations, may well be closer to being answered than ever before.