A new study published on January 11 in the journal Scientific Reports claims that a common facial recognition algorithm can predict a person’s political orientation (liberal versus conservative, for example) from a single social media profile picture with 72% accuracy. The study’s author, Michal Kosinski, says this poses dramatic risks to privacy and civil liberties.
Kosinski, an associate professor at Stanford University’s Graduate School of Business, has published widely on the social implications of facial recognition technology. Some of his past studies on algorithmic profiling have generated considerable controversy, for example his 2018 paper on using AI to detect sexual orientation.
Guessing political orientation from profile pictures
The new study used the profile pictures of over one million participants to show that a common facial recognition algorithm can predict people’s political orientation with 72% accuracy. Accuracy was similar across countries (the U.S., Canada, and the UK) and across online platforms (Facebook and a dating website).
Of course, people make such inferences about each other all the time, based purely on visual information. As Kosinski writes, studies have shown that people can use facial information to make better-than-chance guesses about traits such as honesty, personality, intelligence, sexual orientation, political orientation, spousal status, and violent tendencies.
But the accuracy of these human judgments is generally low. For example, one study showed that when people try to guess which of two photographed faces belongs to a conservative and which to a liberal, they are correct only about 55% of the time, or slightly above chance.
Outperforming the personality questionnaire
Likewise, a 100-item personality questionnaire (which the present study also administered) predicts a subject’s political orientation with only 66% accuracy. The questionnaire included items such as “I treat all people equally” and “I believe that too much tax money goes to support artists.”
“In other words,” Kosinski says, “a single facial image reveals more about a person’s political orientation than their responses to a fairly long personality questionnaire, including many items ostensibly related to political orientation.”
The algorithm in the present study, at 72%, outperformed both of those methods. “Algorithms excel at recognizing patterns in huge datasets that no human could ever process,” Kosinski writes. Furthermore, algorithms “are increasingly outperforming us in visual tasks ranging from diagnosing skin cancer to facial recognition to face-based judgments of intimate attributes.”
Using a million faces as a political orientation test
The present study used a sample of just over one million participants. About 900,000 came from a popular dating website, which provided the data in 2017; slightly more than 100,000 were U.S. Facebook users. The dating dataset also included the users’ country, self-reported political orientation, gender, and age.
The Facebook sample included public profile images, age, gender, political orientation, and personality scores volunteered by just over 100,000 U.S. Facebook users who were recruited through an online personality questionnaire between 2007 and 2012. Participants provided informed consent for their data to be recorded and used in research.
The software tightly cropped the pictures around the face, so as to remove background information. It then resized them to 224 × 224 pixels. The researchers ran the images through software that used “face descriptors” to identify the faces’ core features.
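To make that pipeline concrete, here is a minimal, hypothetical sketch in Python. The study’s actual descriptor extractor is a pretrained facial recognition network and its classifier is not reproduced in this article; the sketch below substitutes synthetic descriptor vectors and a hand-rolled logistic regression, purely to illustrate the descriptors-in, orientation-out setup. It is not the author’s code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for "face descriptors": fixed-length numeric
# vectors summarizing a cropped, resized face image. We simulate 1,000
# people with 128-dimensional descriptors.
n, d = 1000, 128
labels = rng.integers(0, 2, size=n)        # 0 or 1, an arbitrary orientation coding
offset = np.zeros(d)
offset[:5] = 0.5                           # weak signal planted in 5 of 128 dimensions
X = rng.normal(size=(n, d)) + np.outer(labels, offset)

# Logistic regression trained by plain gradient descent (no ML library),
# mapping each descriptor to a probability of the label being 1.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - labels)) / n      # gradient step on weights
    b -= lr * (p - labels).mean()           # gradient step on intercept

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the planted signal is weak and noisy, the classifier lands well above chance but far below perfect, which is the qualitative regime the study describes.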
High accuracy, even when limiting the sample
The algorithm’s accuracy at guessing either a liberal or conservative political orientation was 73% for the U.S. Facebook users. For the U.S. photos taken from the dating site, the accuracy rate was 72%. For the dating website users in Canada it was 71%, and from the UK 70%.
Past research has found that demographic traits help predict political orientation. In the U.S., for example, white people, older people, and men are more likely to be conservative. So another part of the study tested the algorithm’s accuracy when the sample was restricted to people of the same age range, gender, and ethnicity. In this case, accuracy dropped by only about 3.5%, to 65% to 68% for the U.S., Canadian, and UK dating website users, and 71% for the U.S. Facebook users.
“This indicates that faces contain many more cues to political orientation than just age, gender, and ethnicity,” Kosinski says.
Which facial features inform the algorithm about political orientation?
The study also looked at the correlations between political orientation and “transient” facial features. These included head tilt, emotional expression (for example sadness or anger), eyewear, and facial hair.
The most predictive feature was head orientation (which yielded 58% accuracy), followed by emotional expression (57%).
“Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust,” Kosinski writes.
Facial hair and eyewear had no significant effect on accuracy.
Can we outwit the algorithm?
Of course, we could change our facial expressions and head tilt when out in public, so as to confuse the AI. But to do this consistently “would be challenging,” Kosinski says, “even if one knew exactly which of their transient facial features reveal their political orientation.”
“Moreover,” he adds, “the algorithms would likely quickly learn how to extract relevant information from other features — an arms race that humans are unlikely to win.”
In sum, Kosinski writes, the “high predictability of political orientation from facial images implies that there are significant differences between the facial images of conservatives and liberals.”
Of course, that does not imply “that liberals and conservatives have innately different faces,” Kosinski writes. It only suggests that people’s photographs reveal this information with considerable accuracy.
Privacy implications as accuracy continues to improve
From a privacy protection standpoint, Kosinski says, “the distinction between innate and transient facial features matters relatively little.”
Of course, 72% isn’t terribly high. Does that mean there’s nothing to worry about from a privacy perspective?
The accuracy levels presented in this study probably do not represent “an upper limit of what is possible,” Kosinski writes. Better accuracy could likely be obtained by using multiple images per person (this study used just one per person), or using higher-resolution images, or training neural networks to identify political orientation, or “including non-facial cues such as hairstyle, clothing, headwear, or image background.”
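The first of those suggestions, combining multiple images per person, is easy to illustrate. The simulation below uses synthetic per-image scores rather than the study’s model: each image of a person yields a noisy prediction, and averaging several such predictions cancels some of the noise, lifting accuracy above what a single image achieves.

```python
import numpy as np

rng = np.random.default_rng(1)

n_people, k_images = 5000, 5
true_label = rng.integers(0, 2, size=n_people)   # 0 or 1, arbitrary coding

# Hypothetical: each image produces a noisy score centered on the truth
# (positive scores lean toward label 1, negative toward label 0).
signal = np.where(true_label == 1, 0.6, -0.6)
scores = signal[:, None] + rng.normal(size=(n_people, k_images))

# Classify from one image versus from the average of k images per person.
acc_single = ((scores[:, 0] > 0) == (true_label == 1)).mean()
acc_avg = ((scores.mean(axis=1) > 0) == (true_label == 1)).mean()
print(f"one image: {acc_single:.2f}, average of {k_images} images: {acc_avg:.2f}")
```

Averaging k independent noisy scores shrinks the noise by roughly the square root of k, which is why pooling even a handful of photos of the same person can push accuracy well past the single-image figure.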
And 72% is already high enough for some use cases. “For example, even a crude estimate of an audience’s psychological traits can drastically boost the efficiency of mass persuasion,” Kosinski writes.
In conclusion, he writes, “We hope that scholars, policymakers, engineers, and citizens will take notice.”
(The code and datasets used to compute the results, excluding the actual images, are available at https://osf.io/c58d3/.)
Study: “Facial recognition technology can expose political orientation from naturalistic facial images”
Author: Michal Kosinski
Published in: Scientific Reports
Publication date: January 11, 2021
Photo: by Gerd Altmann via Pixabay