
Giving human touch to Alexa or Siri can backfire


Even small changes in the dialogue can make the chatbot seem more interactive. (Representational Image: iStock)

A team led by an Indian American researcher has found that giving a human touch to chatbots like Apple Siri or Amazon Alexa may actually disappoint users.

Just giving a chatbot a human name or adding human-like features to its avatar might not be enough to win over a user if the device fails to maintain a conversational back-and-forth with that person, according to S Shyam Sundar, co-director of the Media Effects Research Laboratory at Pennsylvania State University.

“People are pleasantly surprised when a chatbot with fewer human cues has higher interactivity,” said Sundar.


“But when there are high human cues, it may set up your expectations for high interactivity – and when the chatbot doesn’t deliver that – it may leave you disappointed,” he added.

In fact, human-like features might provoke a backlash against chatbots that look human but turn out to be less responsive.

During the study, Sundar found that chatbots that had human features — such as a human avatar — but lacked interactivity disappointed the people who used them.

However, people responded better to a less-interactive chatbot that did not have human-like cues.

High interactivity is marked by swift responses that match a user’s queries and by a threaded exchange that can be followed easily.

According to Sundar, even small changes in the dialogue, like acknowledging what the user said before providing a response, can make the chatbot seem more interactive.
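
The study itself does not include code, but the kind of dialogue tweak Sundar describes can be sketched in a few lines of Python; the function and sample messages below are hypothetical, invented purely to illustrate the acknowledgement pattern.

```python
# Hypothetical sketch of the "acknowledge, then answer" pattern described
# above. The function and messages are invented for illustration; they are
# not taken from the study.

def reply(user_message: str, answer: str) -> str:
    """Acknowledge what the user said before giving the answer."""
    topic = user_message.strip().rstrip("?")
    return f'You asked about "{topic}". {answer}'

# Low interactivity: the bot answers without acknowledging the question.
print("We are open 9am to 6pm, Monday to Saturday.")

# Higher interactivity: the bot first restates what the user asked.
print(reply("What are your store hours?",
            "We are open 9am to 6pm, Monday to Saturday."))
```

Even this trivial restatement makes the exchange read as threaded, the quality the study associates with high interactivity.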

Because people are expected to be leery of interacting with a machine, developers typically give their chatbots human names — Apple’s Siri, for example — or programme a human-like avatar to appear when the chatbot responds to a user.

The researchers, who published their findings in the journal Computers in Human Behavior, also found that merely mentioning whether a human or a machine is involved — that is, providing an identity cue — guides how people perceive the interaction.

For the study, the researchers recruited 141 participants through Amazon Mechanical Turk, a crowdsourcing site where people are paid to take part in studies.

Sundar said the findings could help developers improve acceptance of chat technology among users.

“There’s a big push in the industry for chatbots,” said Sundar.

“They’re low-cost and easy to use, which makes the technology attractive to companies for use in customer service, online tutoring and even cognitive therapy — but we also know that chatbots have limitations,” he added.
