Real risks behind artificial intelligence go way beyond fear of sentience, AI experts warn

WASHINGTON (TND) — A former Google engineer made waves this past month with claims the tech company’s new chat bot feature gained sentience, but technology experts say there are other, more concerning risks artificial intelligence poses to society.

Blake Lemoine received pushback when he argued the bot, known as LaMDA, or Language Model for Dialogue Applications, is now capable of feeling. He was placed on leave in June after giving documents to a Senate committee, claiming the bot discriminated against people on the basis of religion, among other biases. Lemoine was fired this past month for what the company says is violating data security policies, but he still believes LaMDA poses a problem as AI becomes more engrained in society.

“These are just engineers, building bigger and better systems for increasing the revenue into Google with no mindset towards ethics,” Lemoine told Insider earlier this week, referring to what he believes is Google’s lack of preparation for a technology that gains personhood. Google dismissed Lemoine’s claims, calling them unfounded, arguing LaMDA has undergone multiple reviews. The chatbot’s responsible development was detailed in a research paper released by the company, according to Google.

The chat bot is far ahead of Google’s past language pattern models, capable of engaging in conversation in more natural ways than AIs of the past, one expert argues. “In terms of natural language, LaMDA by far surpasses any other chatbot system that I’ve personally seen,” Gaurav Nemade, LaMDA’s first product manager, told Big Technology podcaster Alex Kantrowitz.

Such technology may soon be used to teach physics classes or otherwise provide a rich interactive experience for users, said Nemade, who left Google in January.

Other technologists say the bot’s sophistication can easily trick people into believing they are interacting with something more human than machine.

“You can think of LaMDA like an actor; it will take on the persona of anything you ask it to,” computer science professor Richard Fikes at Stanford University told the Stanford Daily. “[Lemoine] got pulled into the role of LaMDA playing a sentient being,” he added, noting that LaMDA can gather information from the entire internet and store it inside its database.

These databases, with their massive cache of data, help the chatbot to understand how most human conversations are supposed to sound, according to Fikes. LaMDA is a software designed to respond to sentence prompts, meaning sentience is not possible, at least not with Google’s bot, other experts argue.

“Sentience is the ability to sense the world, to have feelings and emotions and to act in response to those sensations, feelings and emotions,” John Etchemendy, co-director of the Stanford Institute for Human-centered Artificial Intelligence (HAI), wrote in a statement to The Stanford Daily.

It is difficult to know when a sentient technology will happen, but some technologists believe it’s worth discussing whether such developments are on the way.

Bruce Schneier, a computer security professional and lecturer in public policy at the Harvard Kennedy School, sees the potential benefits and risks of unleashing LaMDA on the world. He doesn’t believe the chatbot is a person, but he is worried one day it won’t be so easy for humans to make that distinction.

Artificial Intelligence might be helpful to society in obvious ways, such as using it to diagnose illnesses in brain scans, or finding new places to drill for oil, but in other ways it can be dangerous, Schneier believes.

“As we get technologies that do things differently than us, we stop understanding them and maybe we lose control over whether they are better or not,” Schneier told The National Desk, noting that AI can’t explain why it comes to certain conclusions about functions.

That explainability problem, as researchers call it, makes it a challenge to detect issues with the algorithms and systems AI creates.

Schneier cited the 2015 Volkswagen emissions scandal while speaking a a recent conference as a possible example of a loophole AI can exploit without human knowledge.

Volkswagen was caught purposely programming diesel engine-powered vehicles to activate emissions controls only during laboratory emissions testing, manipulating to make them appear to meet US standards. Those same vehicles emitted up to 40 times more emissions while on the road, the Environmental Protection Agency determined.

Imagine having an AI system design engine software that could do the same thing while not notifying humans how it accomplished its goals, Schneier argued.

Lack of understanding is not the only problem. LaMDA and other machine learning technologies are built using software, which can be hacked.

“If you can hack the AIs that sets the interest rates, you can make some money. You can hack the AIs that drive the cars around, you can kill a lot of people,” he said, providing examples where AI can be transformed into weapons or inverted for nefarious reasons.

Schneier added: “Depending on your goal, as these computers get into a position of authority, where they are affecting the world in a direct way without human intervention, the effects of hacking become more catastrophic.”

Researchers are working on explainable AI, or ways for the technology to explain its outputs and how it creates systems, Schneier noted.

In the meantime, he argued, humans can simply look at the results of an AI process to determine whether a device using that technology is behaving correctly and legally.