In 1998, a team of researchers founded Project Implicit for the purpose of identifying, measuring, and correcting implicit (i.e., subconscious) biases in the general public. Project Implicit is organized around the Implicit Association Test (IAT), a psychometric evaluation used to probe the depth and nature of bias in individuals. By showing test takers various pairings of words and concepts (“white,” “black,” “pleasant,” “unpleasant”), the IAT can determine which associations takers make more readily. Consistent lags in pairing a category, like “black,” with positive concepts, like “pleasant,” indicate that the test taker is biased against that category of people.
Since its invention, the IAT has entered the American public’s consciousness in a big way. High-profile articles with titles like “We Are All Racists at Heart” and “Across America, whites are biased and they don’t even know it” drive this point home on a national stage. The importance of the IAT is further motivated by current events like the apparently unjustified shootings of black Americans by police and the Black Lives Matter movement.
Before taking the IAT, a person may sincerely claim that she believes herself to hold no implicit biases. But once her IAT scores roll in — and those scores are just a few clicks away — ignorance is no longer available as a defense. According to the emerging dominant line of thought, someone who has failed the IAT in this way (as most IAT takers do) incurs an obligation to correct her biases — or does she?
How to deal with implicit bias was already a tricky question. For instance, previous evidence suggests that explicit efforts to reduce bias can backfire, accidentally increasing bias instead. But newer research raises even deeper, conceptual problems with the IAT, suggesting that the test doesn’t actually measure anything important within test takers. If this new research is correct, then focusing on the IAT may have been a big red herring in the quest to correct bias.
Using a technique called “meta-analysis,” researchers from the University of Wisconsin–Madison, Harvard University, and the University of Virginia reviewed the results of over 400 previous studies of the IAT, involving more than 70,000 experimental participants. Such a large pool of data, from a wide variety of sources, helps researchers identify consistencies (or lack thereof) in experimental findings across time and place.
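To give a rough sense of what a meta-analysis does: it combines the effect sizes from many individual studies into one pooled estimate, weighting more precise studies more heavily. The sketch below shows the simplest version of this idea (inverse-variance, fixed-effect pooling) on made-up numbers; it is purely illustrative and is not the method or data used by the researchers discussed here.

```python
# Illustrative sketch of fixed-effect (inverse-variance) meta-analysis.
# Each study contributes an effect size weighted by 1 / (its variance),
# so tighter studies count for more in the pooled estimate.
# All numbers below are hypothetical, not real IAT data.

def pooled_effect(effects, std_errs):
    """Combine per-study effect sizes into one pooled estimate and its SE."""
    weights = [1 / se**2 for se in std_errs]  # inverse-variance weights
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5     # SE of the pooled estimate
    return pooled, pooled_se

# Three hypothetical studies of an intervention's effect on IAT scores
effects = [0.40, 0.15, 0.25]    # per-study effect sizes (e.g., Cohen's d)
std_errs = [0.10, 0.08, 0.12]   # per-study standard errors
d, se = pooled_effect(effects, std_errs)
print(f"pooled effect = {d:.3f}, 95% CI half-width = {1.96 * se:.3f}")
```

Because the pooled estimate draws on every study at once, its standard error is smaller than any single study’s, which is what lets a meta-analysis detect (or rule out) consistent effects that individual studies are too noisy to settle.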
The IAT didn’t fare too well in this meta-analysis. The researchers found that IAT scores were quite malleable — various minor interventions (like distracting test takers) easily changed their scores significantly. Worse still, the meta-analysis found little connection between changes in implicit bias and changes in explicit bias or behavior.
In other words, it now appears that IAT scores simply don’t mean much. The scores don’t necessarily measure something stable within a person, and they don’t necessarily predict how that person will behave. Even if you did whatever it takes to earn a less biased score on Project Implicit’s IAT, you still might behave quite poorly out in the real world. Conversely, people with lots of implicit bias, as measured on the IAT, do not necessarily act based on their alleged implicit bias.
This is not the first time the IAT’s validity has come under criticism. Another meta-analysis, from 2014, found that the IAT doesn’t predict interpersonal behavior, perceptions of others, policy preferences, or “microbehavior” well. According to this research, if you want to predict someone’s biased behaviors, it’s more effective to just ask the person if she is biased. Despite concerns that study participants will lie or deceive themselves when asked explicitly about their biases, explicit measures of bias may still be more accurate than the IAT.
None of this is to say that racism doesn’t exist or isn’t a problem. Plenty of measures other than the IAT indicate that racism, sexism, ageism, and other biases are clearly at work in American society today. The IAT was an admirable attempt to diagnose bias in individuals, and raised important questions about what holding bias may (or may not) feel like, from the inside.
However, until further evidence suggests otherwise, the IAT cannot justifiably be deployed for some of the purposes that institutions have found for it. For instance, if IAT scores cannot predict bias-related behavior such as unjustified police shootings of people of color, then the IAT may not be the right tool for framing bias prevention programs for police departments. Rather than feeling ashamed or angry about a “failing” IAT score (or trying to prepare for an IAT retake), you should put your effort into sober self-analysis and, if necessary, incremental change.