Babies and AI go head to head in new NYU study

A recent NYU study compares the ability of infants and machines to understand human behavior.


An image from the study. (Courtesy of Graylin Lucas)

Graylin Lucas, Contributing Writer

Babies may know a lot more than we think. A recent NYU study shows that compared to artificial intelligence, infants are better at detecting the motivation behind human decisions. 

Moira Dillon, the study’s senior author and an assistant professor of psychology at NYU, said the goal of the study was to give artificial intelligence an example of basic human knowledge so it could be used as a tool for reasoning-based tasks. She said the researchers wanted to design a program that replicates infants’ knowledge about human behavior.

“The AI that’s inspired by human intelligence is going to have those same endowments that we humans have in our repertoire for understanding other people, and may be able to help us,” Dillon said. “It may be able to, for example, determine the needs of a person who is trying to act, but can’t quite achieve their goal.” 

The researchers studied 11-month-old infants to determine how their ability to recognize human motivation might differ from that of AI. To measure the difference, they created the Baby Intuitions Benchmark, a set of six tasks designed to evaluate both infant and machine intelligence.

The babies in the study watched animated sequences of moving shapes simulating human behavior; researchers recorded their reactions and compared them with the AI models’ reactions to the same videos. Researchers also monitored how the infants and the AI responded when the animations deviated from predictable human behavior.

Researchers found that the AI models lacked the commonsense psychological instincts the babies had: the models could not predict the motivations behind the sequences.

Grace Lindsay, an NYU psychology and data science professor who was not involved in the study, said that despite the study’s results, it is possible that AI could one day match humans’ innate ability to recognize motivations. 

“The basic science of understanding human intelligence and then translating it into machines in a way that really works is also challenging,” Lindsay said. “In this study, they built this task explicitly to be able to be used by humans and machines, and that at least kind of greases the wheels of that transfer.” 

Dillon is not concerned about AI becoming “too human,” since her findings suggest that AI still fails at basic reasoning and inference.

“The concern that AI would be able to understand rich, multilayered and multi-agent scenarios is still out of reach,” Dillon said. “I don’t think it’s impossible for AI to achieve those kinds of inferences and get that kind of reasoning at some point, but even state-of-the-art AI has challenges that we’ve revealed here.”

Contact Graylin Lucas at [email protected].