Will artificial intelligence change human relationships?

Controversy over Google's talking AI raises important questions about our relationship with machines, and with other people

In early June, a Google engineer caused controversy by claiming that the company's artificial intelligence system, LaMDA (an acronym for Language Model for Dialogue Applications), was a sentient being, capable of perception and consciousness. Google denied the claims and removed the engineer from his duties.

The question, however, remains open. Are we close to the day when computers will have consciousness, feelings and autonomy? Will machines be able to disobey human commands, as happens in the film “2001: A Space Odyssey”?

Experts in the field think not. They say LaMDA is an artificial brain, hosted in the cloud, that learns to “talk”, not to think or feel. It works like this: the machine is fed millions of texts and learns to put words in context and build a dialogue, which makes it resemble a parrot more than a human being.

“Training has an objective, presented in the form of a game: the system is given a complete sentence with one word missing, and it has to guess it,” Julio Gonzalo Arroyo, professor at Uned (Spain's National University of Distance Education) and researcher in its department of natural language processing and information retrieval, explains to the BBC.
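
To make this “game” concrete, here is a minimal sketch of the fill-in-the-blank objective, written with the open-source Hugging Face Transformers library. LaMDA itself is not publicly available, so the BERT model stands in here purely for illustration:

```python
# A minimal sketch of the "guess the missing word" game described above.
# LaMDA is not public, so the open BERT model is used as a stand-in; the
# training objective is the same fill-in-the-blank idea.
from transformers import pipeline  # pip install transformers

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model sees a sentence with one word hidden and proposes likely fits.
for guess in fill_mask("Machines learn to [MASK] by reading millions of texts."):
    print(f"{guess['token_str']!r} (confidence: {guess['score']:.3f})")
```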

Machines infer the meaning of a word by observing the other words that surround it in a text. In this way, they learn to predict patterns and words, much like the text prediction we see in messaging apps on our phones, but with far more memory.
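
The phone-keyboard comparison can be made tangible with a toy model. The sketch below is a deliberately simplified word counter, not how LaMDA works internally: it suggests the next word by counting which word most often followed it in a sample text.

```python
# A toy next-word predictor, like a phone keyboard's suggestions. Real
# systems use neural networks with far more "memory" (parameters and
# context), but the principle of learning from surrounding words is similar.
from collections import Counter, defaultdict

sample = "the machine reads texts and the machine learns words and the user writes".split()

# For each word, count the words that follow it.
following = defaultdict(Counter)
for word, nxt in zip(sample, sample[1:]):
    following[word][nxt] += 1

def suggest(word: str) -> str:
    """Return the continuation seen most often in the sample text."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else "?"

print(suggest("the"))      # "machine": it followed "the" most often
print(suggest("machine"))  # "reads" (first of the tied options)
```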


LaMDA, specifically, learns to create spontaneous responses that don't sound like they came from a robot programmed to recite fixed phrases. It is also capable of recognizing the nuances of a conversation: after learning from billions of words, the machine understands which ones are most appropriate in each context.
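
Something similar can be observed with public generative models. In the sketch below, the small open-source GPT-2 model (again, only a stand-in for the proprietary LaMDA) samples its reply word by word instead of retrieving a canned phrase, which is why each run can produce a different, seemingly spontaneous answer.

```python
# A minimal sketch of open-ended response generation using a small public
# model. Responses are sampled token by token rather than looked up from
# fixed phrases, which is what makes them feel spontaneous.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the random sampling so the example is reproducible

prompt = "User: Do you ever feel lonely?\nAssistant:"
result = generator(prompt, max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```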

Humanizing the machine

According to a Washington Post article, most researchers in the field say that the answers generated by artificial intelligence systems like LaMDA are based on what humans have already posted on Wikipedia, Reddit, message boards and other corners of the internet. That does not mean, however, that the machine understands the meaning of the sentences it produces.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily M. Bender, a professor of linguistics at the University of Washington, told the newspaper.

For Google, it makes no sense to conclude from these conversational models that the machine is a sentient being, since the systems imitate dialogue found in millions of sentences. In other words, there is so much data feeding the AI that it doesn't need to be sentient for the conversation to feel real to us.

For other experts, then, the issue is not whether the machine is sentient, but how we humans react to the interaction when we believe the robot is “people like us”, as an article published in Wired points out. Would you be able to open up to an artificial intelligence acting as a therapist? Or as a girlfriend, like in the movie “Her”?

The consequences for human relationships

In the future, when AI starts having deeper conversations with us, we will need to be careful about what we say, as this data will be shared with the company that created the robot therapist (or girlfriend). It could even be used to train a digital “me” let loose in the metaverse, speaking just like the original human being.

On the one hand, treating the machine with this level of empathy could lead us to reveal too much information. On the other hand, treating it as an unimportant being, or as an object, can affect the way we treat other human beings, point out researchers Jason Edward Lewis, Noelani Arista, Archer Pechawis and Suzanne Kite in Wired.

They find it more interesting to think about how we relate to these machines: whether we are, for example, abusive or sexist towards virtual assistants (most of which are presented as female).

After all, if we get used to approaching robots disrespectfully in our daily lives, we may start acting the same way towards other human beings. “A chatbot or human virtual assistant must be respected, so that its own simulacrum of humanity does not habituate us to cruelty towards real humans,” the article points out.
