
When you walk into a room full of people, do you immediately pick up on the vibe? Can you quickly scan faces, recognize the hidden meaning behind the arch of an eyebrow or the twitch of a smile, and sense the emotion when the conversation shifts? Or do the meanings and intentions of others often escape you?
Not everyone is equally adept at picking up on the social cues in a given environment, a skill colloquially known as reading the room. Recently, scientists from the University of California, Berkeley, and Japan's National Institute of Information and Communications Technology in Osaka set out to understand why.
Across three studies, the researchers found that individual differences in the ability to pick up nonverbal cues stem from idiosyncrasies in the way different people gather, weigh, and integrate facial and contextual information from the environment.
Read more: “How to tell if you’re a jerk”
They found that those who are good at reading social cues quickly engage in a complex calculation, evaluating the relative clarity or ambiguity of different signals so they can give greater weight to those with clearer meaning. Those who are not good at it keep things simple, giving equal weight to every piece of information they perceive. The researchers published their results in Nature Communications.
“We don’t know exactly why these differences occur,” said Jefferson Ortega, a Ph.D. student in psychology at the University of California, Berkeley, and co-author of the study, in a statement. “But the idea is that some people may use this more simplified integration strategy because it is less cognitively demanding, or it could also be because of an underlying cognitive deficit.”
To conduct their experiment, Ortega’s team asked 944 volunteers to guess someone’s mood in a series of video clips drawn from Hollywood films, documentaries, and home videos collected from YouTube. To isolate the influence of the different types of information people might use to make their assessments, the researchers blurred the backgrounds in some clips and blurred the faces in others while leaving the context clear. In a third set of videos, both the faces and the context were clear.
Ortega and his colleagues expected that most people would use an inference method known as Bayesian integration, weighing each signal according to how clear or ambiguous it is. But only 70% of participants did so. The other 30% simply averaged the signals, giving them equal weight regardless of how clear or ambiguous they were.
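To make the contrast concrete, here is a minimal sketch of the two strategies, assuming each cue can be summarized as a Gaussian estimate with an uncertainty. The cue values, uncertainties, and function names are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical illustration (not the authors' code): two noisy cues about
# someone's emotional tone, one from the face and one from the scene context.
# A blurred cue is modeled as a noisier estimate with a larger sigma.

def bayesian_integration(estimates, sigmas):
    """Precision-weighted average: clearer cues (smaller sigma) get more weight.
    For independent Gaussian cues this is the Bayes-optimal combination."""
    precisions = 1.0 / np.square(sigmas)
    weights = precisions / precisions.sum()
    return float(np.dot(weights, estimates))

def equal_weight_integration(estimates, sigmas):
    """Simple average: every cue counts the same, no matter how ambiguous."""
    return float(np.mean(estimates))

# Example: a clear face suggests mild positivity (+0.6, sigma 0.2), while a
# blurry background weakly suggests negativity (-0.4, sigma 1.0).
estimates = np.array([0.6, -0.4])
sigmas = np.array([0.2, 1.0])

print(bayesian_integration(estimates, sigmas))      # ~0.56: the clear face dominates
print(equal_weight_integration(estimates, sigmas))  # 0.10: the blurry cue drags it down
```

Under the Bayesian strategy, the ambiguous background barely moves the estimate; under equal weighting, it pulls the answer substantially toward neutral.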
“It was very surprising,” Ortega said. “The computational mechanisms — the algorithm the brain uses to do this — are not well understood. That’s where the motivation for this paper came from. It’s just an amazing achievement.”
Something to think about the next time you have to read a room quickly.
Lead image: Blueastro / Shutterstock