WASHINGTON: People will overlook white lies from robots, but are less tolerant when machines “misrepresent” their capabilities or carry out surveillance, according to George Mason University scientists.
The team surveyed almost 500 people for their views on hypothetical scenarios set in fields where robots are already widely deployed, such as health care, retail and domestic cleaning.
The results showed people were mostly willing to put up with a dishonest health care robot if it "protects a patient from unnecessary pain" by being economical with the truth.
In contrast, a scenario in which a robot covertly filmed people was deemed unacceptable by a majority of those surveyed. Some participants said the snooping could be justified, however, speculating that a robot “might film for security reasons.”
The deployment of industrial and farm robots has soared over the past decade as the technology has become more sophisticated and robots more affordable.
Last month, Morgan Stanley analysts said merging AI with robots to create “humanoids” could lead to millions of people losing their jobs in sectors such as construction and farming.
“With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behaviour sets could be utilised to manipulate users,” said Andres Rosero of GMU.
AI systems have been found to churn out so-called "hallucinations", giving inaccurate or misleading responses to user prompts.
The Massachusetts Institute of Technology recently published research showing how "transparency is often lacking" in the training of AI bots, or large language models.
“We’ve already seen examples of companies using web design principles and artificial intelligence chatbots in ways that are designed to manipulate users towards a certain action,” said Rosero, whose team’s findings were published in the journal Frontiers in Robotics and AI. – dpa