As human interaction with robots and artificial intelligence grows rapidly in areas like healthcare, manufacturing, transportation, space exploration, and defence, information about how humans and autonomous systems (i.e., robots) work together in teams remains scarce.
Recent research shows that human-autonomy teaming comes with interaction limitations that can leave these teams less efficient than all-human teams.
Existing knowledge about teamwork is based primarily on human-to-human or human-to-automation interaction, which positions humans as supervisors of automated partners.
But as autonomous systems develop decision-making abilities based on real-time situation assessment, they can become teammates rather than servants. These shared-decision interactions are known as human-autonomy teaming, or HAT.
Nancy Cooke, a cognitive psychologist and professor of human systems engineering at Arizona State University (ASU), explored how an artificial intelligence agent can contribute to team communication failures, and how it can improve those interactions, in her talk at the annual meeting of the American Association for the Advancement of Science (AAAS).
"One of the key aspects of being on a team is interacting with team members, and a lot of that on human teams happens by communicating in natural language, which is a bit of a sticking point for AI and robots," Cooke said.
Her discussion addressed a study in which teams of two humans and an AI, or "synthetic teammate," fly an unmanned aerial vehicle (UAV). The AI was the pilot, while the people served as a sensor operator and navigator.
The AI, developed by the Air Force Research Laboratory, communicated with the people via text chat.
"The team could function pretty well with the agent as long as nothing went wrong," Cooke said. "As soon as things get tough or the team has to be a little adaptive, things start falling apart, because the agent isn't a very good team member."
The AI was unable to anticipate its teammates' needs the way humans do. As a result, it didn't provide critical information until asked; it never gave a "heads up."
"The whole team kind of fell apart," Cooke said. "The humans would say, 'OK, you aren't going to give me any information proactively, I'm not going to give you any either.' It's everybody for themselves."
As Professor Cooke made clear, robots can be effective teammates only until something goes wrong and the routine breaks down.
I also think robots can be good team partners, provided we instruct and calibrate them properly.
What do you think? Let me know in the comments.
Till then,
Happy Learning !!!