The Unbiased History of Unconscious Bias

Judging by what’s been happening in the corporate and HR world for the last two decades, unconscious bias is a well-used term, though perhaps better described as well-misused. When it comes to candidate selection, we’re now told that we must eliminate our unconscious biases or risk hiring through one of many corrupt lenses that we allegedly all see through and cannot control.

You may recall playing the kids’ party game, telephone, in which a word is whispered from one kid to the next, and what started as “birthday cake” ends up as “birdie cage.” Over the past hundred years, unconscious bias has gone through a similar filtering system of childlike naivety and carelessness. Instead of unconscious bias, we’ve ended up with a proverbial birdie cage. The birdie cage largely stems from the work of Tversky and Kahneman (1974) and Greenwald and Banaji (1995), who wrote about some of the negative consequences of utilizing heuristics, or cognitive shortcuts, to make decisions. However, the work done by Herbert Simon (1947) and later by Gerd Gigerenzer (1996) demonstrated that heuristics are a necessary cognitive tool for making decisions under conditions of uncertainty, when we don’t have all the information we need.

The only part of a century of research that seems to have ended up in the LinkedIn and HBR headlines is that our biases are corrupt and we must get rid of them. It’s just not that simple. This is only part of the story, a narrative made from the crumbs of a whole story that hasn’t been told until now.

Today’s Corporate Definition

A bias is a strong feeling or inclination toward or against something or someone. Once companies hastily learned that we’re all prejudiced and our unconscious biases are to blame, they rushed to hire consultants to advise them about this apparent plague of our implicit thoughts.

Just look at what one of the world’s largest and most successful companies, Microsoft, trains their employees to believe. Microsoft is not the only guilty party – most prominent global companies will have rolled out similar training. What follows is taken directly from their publicly available unconscious bias training. “Unconscious bias is defined as stereotypes, prejudices, or preferences that cause us to favor a person, thing, or group in a way deemed unfair. They are implicit attitudes, behaviors, words, or actions that we exhibit in our personal lives and the workplace. We all have unconscious biases. They are mental shortcuts that help us navigate our day effectively and efficiently” (Microsoft, 2022).

No wonder we are confused. First, they write that unconscious bias causes us to favor a thing or group in an unfair way. That’s consistent with how we’re being taught to view unconscious bias, despite it being wrong, or at least not altogether accurate. Then they write, “They are mental shortcuts that help us navigate our day effectively and efficiently.” That is a definition of heuristics, not unconscious bias.

So, which is it? Are they mental shortcuts that help us navigate our day effectively and efficiently, or do they cause us to favor a thing or group in a way that’s unfair? The truth, as you’ll read shortly, is that heuristics are shortcuts our brains take to reach decisions under conditions of uncertainty. As a result, when we don’t have enough information, knowledge, or time to make an informed decision, the biases we’ve learned from previous experiences will influence our conclusion.

Microsoft was off the mark in the initial statement that unconscious bias leads us to favor things or groups unfairly. This is the negative narrative on unconscious bias that’s been hijacked by higher education and adopted, without question, by the corporate world to blame something intangible for the actions of those who consciously discriminate.

The historic lack of diversity in our companies is a multi-dimensional problem with many contributing factors. Singling out unconscious bias has been a distraction that has slowed progress. It’s not the smoking gun we have been led to believe it is. So how did we go from birthday cake to birdie cage? What follows is a summary of the scientific literature on the subject.

Early Influence – Thorndike and Polya

It wouldn’t be fair to get into the research without mentioning some of the earlier work done on the broad topic of biases. The terms unconscious and implicit bias weren’t used at the time, but these early 20th-century scientists were most certainly talking about the same subject.

In 1920, Edward Thorndike, an American psychologist, was the first to use correlation analysis to show how people judge others based on seemingly unrelated criteria. He asked servicemen to rank their fellow officers based on several traits and attributes, including intelligence and physique. The soldiers who were identified as being taller and more attractive were also rated as being more intelligent and better soldiers. Thorndike called this the “halo effect” (Thorndike, 1920).

In 1945, George Polya, a Hungarian mathematician and professor of mathematics at Stanford University, wrote a book called “How to Solve It,” which went into great detail about heuristics and problem-solving. How do we know he was influential? Well, he taught Herbert Simon, who, as you will soon learn, changed everything when it came to how we utilize heuristics to make real-life decisions under conditions of uncertainty.

Heuristics Undefined – Herbert Simon

Herbert Simon is most famous for his theory of economic decision making, which he first outlined in his 1947 book, “Administrative Behavior – A Study of Decision-Making Processes in Administrative Organizations” (Simon, 1947). In this book, he introduced the concepts of “bounded rationality” and “satisficing,” work on decision making that won him the Nobel Prize in Economics in 1978.

In his book, he observed that traditional economic decision-making theory assumed we are entirely rational beings with unlimited information, unlimited time, and unlimited cognitive abilities at our disposal. This is the classical economic model of rational thinking. In reality, however, we often decide under conditions of uncertainty, limited by the information available to us, by time, and by our ability to think and solve problems.

Simon summed up bounded rationality perfectly in his Nobel Memorial Lecture in 1979 when he said, “The classical model of rationality requires knowledge of all the relevant alternatives, their consequences and probabilities, and a predictable world without surprises. These conditions, however, are rarely met for the problems that individuals and organizations face.” (Simon, 1979).

In other words, our rationality is bounded by our cognitive limitations, the information that’s available to us, and time constraints. Instead of making the ‘best’ choices, we often make choices that are satisfactory, which he called “satisficing,” a combination of two words: satisfy and suffice.

“Because administrators satisfice rather than maximize, they can choose without first examining all possible behavior alternatives and without ascertaining that these are, in fact, all the alternatives. …They can make decisions with relatively simple rules of thumb that do not make impossible demands upon their capacity for thought. Simplification may lead to error, but there is no realistic alternative in the face of the limits on human knowledge and reasoning” (Simon, 1947).

The “rules of thumb” he writes about are what we now call heuristics. The last sentence in the quote is significant: “Simplification may lead to error, but there is no realistic alternative in the face of the limits on human knowledge and reasoning.” The errors he writes about here result from our implicit or unconscious biases. However, those same biases also lead to the right choices when the choices turn out to be correct. He doesn’t write that they “do” lead to errors but that they “may.” This is, for me, the first accurate representation of what biases are. They are not some automatic, unconscious prejudiced instinct that’s hardwired into us all. Having biases is hardwired into us, but the biases themselves are not. These biases are the backup decision-making influences we default to when we are forced to make decisions without knowing everything we need to know and are constrained by time and our cognitive limitations. These are the conditions that nearly all leaders work in most of the time.
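To make the contrast concrete, here is a minimal sketch, in Python, of maximizing versus satisficing. The candidate names, scores, and “good enough” threshold are all invented for illustration; the point is the shape of the two decision rules.

```python
# Minimal sketch: classical maximizing vs. Simon's satisficing.
# All names, scores, and the threshold below are invented.

def maximize(options, score):
    # Classical rationality: evaluate every alternative, pick the best.
    return max(options, key=score)

def satisfice(options, score, good_enough):
    # Simon's rule of thumb: accept the first alternative that
    # clears the aspiration level, then stop searching.
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # no alternative met the aspiration level

candidates = ["Avery", "Blake", "Casey", "Devon"]
interview_score = {"Avery": 6, "Blake": 8, "Casey": 9, "Devon": 7}.get

print(maximize(candidates, interview_score))                  # Casey, after scoring all 4
print(satisfice(candidates, interview_score, good_enough=8))  # Blake, after scoring only 2
```

When each evaluation is costly and uncertain, as it is in real hiring, the satisficer’s savings are exactly what Simon meant by rules of thumb that “do not make impossible demands upon their capacity for thought.”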

Think about any other kind of bias, religious or political, say. If you grew up a Christian, for example, and as an adult decided that Judaism was a religion you wanted to pursue, your religious biases would, in time, change. If you grew up in a family that supported the Democrats, but as an adult felt more compelled by the message of the Republicans, your resulting biases would change.

Think about candidate selection for a moment. We can’t just plug a candidate’s entire history into our mental hard drive and make exact calculations regarding their suitability for a role. We will only ever have the hour or several hours we spend interviewing them, plus whatever other information we can find, on which to base a decision. That decision will always be made under uncertainty and will consequently be prone to our biases. Even that, however, is a very narrow and loaded view of the consequences of biases.

Let’s take a hypothetical example of a global bank looking to hire a new Chief Financial Officer (CFO). Two people will interview the candidate. One is the CEO of the bank, and the other is an intern in the first week of their internship. Neither interviewer has a detailed biography of the candidate’s life, and each has only one hour to make a choice. Both the CEO and the intern will rely on heuristics to decide. Are we to believe that both interviewers’ heuristics will lead to the same scale of errors, or is there any value in the experience of the CEO? Of course, it’s unrealistic to propose that an intern would play a pivotal role in hiring a board-level executive. Still, the point is that with experience comes knowledge and intuition. With these, we move from a decision-making scenario of maximum uncertainty (the intern) to one of minimum uncertainty (the CEO). There is no maximum-certainty scenario here, as it’s impossible to know everything there is to know; either way, it’s a heuristic decision made in conditions of uncertainty.

Over time and with experience, we can get much better at intuitively making decisions, which is the upside of heuristics. Also, when it comes to hiring for diversity, if we know we have to hire more diverse candidates, whatever that means, then the decision should no longer be subject to our unconscious biases, as, by definition, biases result from heuristics under conditions of uncertainty. If one of the conditions known in the interview process is that we have to hire more females, for example, then to overlook a female in favor of a male would no longer be unconscious bias; it would be explicit discrimination. The same can be said if a female were hired simply because she is a female rather than because she is a competent candidate. The prejudiced outcome is the same, but focusing on our unconscious biases does not tackle the problem.

Herbert Simon never used the term heuristic in his 1947 book, but he is clearly referring to heuristics when writing about rules of thumb and satisficing. He did start using the term later in his career, an influence he credited to George Polya, mentioned previously. In Simon’s memoir about his 1978 Nobel Prize in Economics, he wrote, “Polya’s widely read book, How to Solve It, published in 1945, had introduced many people (including me) to heuristic, the art of discovery.”

Getting rid of our unconscious biases is impossible. It’s also wrong to think that AI will help us eliminate them. In 1955, Herbert Simon, his ex-student Allen Newell, and John Shaw, a programmer from RAND, built the world’s first AI program (Simon and Newell, 1955). The Logic Theorist, as it was known, was the first machine in the field of heuristic programming and proved 38 of the first 52 theorems of the Principia Mathematica. It was presented at an event at Dartmouth College in 1956, where the term artificial intelligence was introduced to the world; the term had been coined one year earlier in a proposal, submitted by John McCarthy of Dartmouth College and his collaborators, for a 2-month, 10-man study of artificial intelligence. Although computer science had been around for at least two decades at this point, this was the advent of AI.

In a 1957 address, Herbert Simon stated:

“In short, we now have the elements of a theory of heuristic (as contrasted with algorithmic) problem solving, and we can use this theory both to understand human heuristic processes and to simulate such processes with digital computers” (Simon & Newell, 1958).

The reason he described heuristic problem solving as a solution for AI is that computers, still to this day, don’t have the computing power to calculate all possible outcomes for all problems. They have to use heuristics, which is why Simon’s research and findings on bounded rationality were so crucial to the development of AI. Simon distinguishes between “well-structured problems,” where algorithms suffice, and “ill-structured problems,” for which no algorithms are known.

Take chess, for example. In 1950, the American mathematician Claude Shannon calculated what came to be known as the “Shannon Number” (Shannon, 1950). That number is a 1 followed by 120 zeros, an estimate of the number of possible move sequences a computer or human would have to evaluate to capture every way a game could unfold before making a decision. So, like humans, computers ‘playing’ chess work on decision trees of only 10-20 moves ahead, thus reducing the computing power required. This is heuristics at play in computers.
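To show the shape of that idea in code, here is a minimal, runnable sketch of depth-limited search with a heuristic cutoff. A toy subtraction game stands in for chess, since a real engine’s move generation and evaluation are far more involved, and the evaluation rule below is an invented rough guess, not a chess heuristic.

```python
# Minimal sketch of depth-limited heuristic search (negamax form).
# Toy game: players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins. It stands in for chess here.

def heuristic(stones):
    # Rough evaluation used when we stop searching early. A chess
    # engine would score material, mobility, king safety, and so on.
    return 0.5 if stones % 4 else -1.0  # crude rule of thumb

def negamax(stones, depth):
    """Value for the player to move, looking only `depth` moves
    ahead instead of expanding the entire game tree."""
    if stones == 0:
        return -1.0               # previous player took the last stone: we lost
    if depth == 0:
        return heuristic(stones)  # search cutoff: fall back on the heuristic
    return max(-negamax(stones - take, depth - 1)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones, depth=5):
    # Choose the move whose heuristic-bounded value is highest.
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: -negamax(stones - t, depth - 1))

print(best_move(21))  # -> 1, leaving the opponent on a losing multiple of 4
```

Looking five moves ahead with a rough fallback evaluation is vastly cheaper than exhausting the game tree, and it still plays well. That trade, bounded search plus an informed guess, is exactly the bargain Simon described.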

Herbert Simon’s work has been instrumental in understanding heuristics and biases. Yet it has been, at best, selectively interpreted and, at worst, entirely ignored by a corporate agenda intent on blindly pinning the unconscious bias tail on the diversity donkey. People seem to be completely ignoring the utility of heuristics, how they improve with experience, and how errors are reduced when information, like the need to hire more diverse candidates, is better known. We’re being taught that our biases have corrupted our decision-making abilities and are always unfair. That’s just not true. Furthermore, we’re told we should rely more on computers and AI, yet, thanks to Herbert Simon, computers also rely on heuristics to make decisions.

Heuristics and Biases – Kahneman and Tversky

In the journey towards our present-day definition of unconscious bias, we leapfrog from Herbert Simon’s revelations to the work of two psychologists, Daniel Kahneman and Amos Tversky. Their 1974 paper, Judgment under Uncertainty: Heuristics and Biases, marks the first appearance of the term “cognitive bias” (Tversky & Kahneman, 1974).

“This article has been concerned with cognitive biases that stem from the reliance on judgmental heuristics.” Remember the Microsoft definition that confused heuristics with unconscious bias? They wrote a definition of unconscious bias, suggested it was unfair, and then used a definition of heuristics to describe it. This paper by Kahneman and Tversky was the first time in history that heuristics and cognitive biases were mentioned together. Definitions are important.

In the paper, they wrote about how the average human does not always make rational choices, an idea that clearly evolved from Herbert Simon’s work. The paper opened with, “Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar” (Tversky & Kahneman, 1974). It then went on to describe three heuristics: representativeness, availability, and anchoring and adjustment. This was the first time specific heuristics were given labels.

Representativeness Heuristic

Kahneman and Tversky use an illustration in which a description of a man is shared, and individuals are asked to assess the likelihood that he works in one of several different careers based purely on that description. Participants rank-order the likelihood of his having each career based on the degree to which he is representative of, or similar to, the stereotype of that occupation (librarian, engineer, doctor, and so on). Clearly, this heuristic can lead to errors, as not all information is known.

Availability Heuristic

The availability heuristic occurs because we can recall some memories more easily than others. In the example Kahneman and Tversky gave, participants were asked whether more words in the English language start with the letter r or have r as their third letter. While most respond with the former, the latter is actually true. It’s simply easier to recall words starting with r than words with r as the third letter.

Anchoring and Adjustment

“Different starting points yield different estimates, which are biased toward the initial values. We call this phenomenon anchoring” (Tversky & Kahneman, 1974). For example, let’s say you’re buying a new car. You see that the price at the dealer is $25,000. You didn’t shop around. When you get there, you negotiate the salesperson down to $24,000. You think you got a bargain. You then drive by another showroom on your way home and see the identical new car on sale for $23,500. You formed your value judgment based on the initial price of $25,000, the anchor price.

The “summary” section of the paper reads, “These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and the biases to which they lead could improve judgments and decisions in situations of uncertainty” (Tversky & Kahneman, 1974). Remember what Herbert Simon wrote in his 1947 book: “Simplification may lead to error, but there is no realistic alternative in the face of the limits on human knowledge and reasoning” (Simon, 1947). They are all saying the same thing – heuristics are not only helpful but often necessary, and on occasion, they may lead to errors. Where the unconscious bias mafia has gone astray is in isolating the phrase “lead to systematic and predictable errors” without any regard for “usually effective,” or for the fact that these are decisions made under conditions of uncertainty where not all information and options are known.

Hijacking Heuristics and Biases – Banaji and Greenwald

Now we’re getting to the point where the birthday cake turned into the birdie cage. Psychologists Mahzarin Banaji and Anthony Greenwald introduced the concept of “implicit bias” in their paper, “Implicit social cognition: Attitudes, self-esteem, and stereotypes” (Greenwald & Banaji, 1995). This was the research that led to the explosion of unconscious bias over the past two decades. As with most explosions, there’s a lot of noise followed by a lot of collateral damage.

Their paper started by claiming to have come up with a new construct, “implicit social cognition,” and then defined it as meaning that “traces of past experience affect some performance, even though the influential earlier experience is not remembered in the usual sense—that is, it is unavailable to self-report or introspection.” That’s not exactly news to anyone; our past experience affects how we think, even in unconscious ways. Remarkable.

What is interesting about that statement is that if our past experience affects our perceptions, then new experiences can also change our future views; in other words, our perceptions can change. Many of us knew this already, but if you read the headlines and articles on HBR and LinkedIn, you’d be forgiven for thinking we are all doomed and can’t be trusted to make informed choices because our unconscious biases have forever corrupted our moral compass. That doesn’t bode well for those who believe our beliefs are ingrained in our unconscious biases and that our only hope is to try and cover them up.

The paper concluded: “Much social cognition occurs in an implicit mode. This conclusion comes from a reinterpretation of many findings that indicate the importance of implicit operation of attitudes, and of the self-esteem attitude in particular, and also from existing and new evidence for the implicit operation of stereotypes. By adding this conception of the implicit mode to existing knowledge of the explicit mode of operation of social psychology’s basic constructs, the scope of those constructs is extended substantially” (Greenwald & Banaji, 1995).

No, it’s really not extended substantially. The paper was just a regurgitation of previous research by others into stereotypes, attitudes, and self-esteem. It even says in the paper:

“Of any newly offered theoretical construct, it should be asked: How does the new construct differ from existing ones (or is it only a new label for an existing construct)? The preceding paragraphs show that implicit social cognition, although strongly rooted in existing constructs, offers a theoretical reorganization of phenomena that have previously been described in other ways and, in some cases, not previously identified as having an unconscious component.”

So, attitudes, stereotypes, and self-esteem have never been identified as having an unconscious component? If nobody has ever done so, it’s probably because it’s self-evident from their definitions that they reside in our silent unconscious before manifesting in explicit behavior. In this one paragraph, they’ve suggested their construct is new because they’ve reorganized and renamed existing constructs – in the very paragraph where they wrote that a new construct should be more than a new label for an existing one.

This paper was published in 1995, and by 1994 the authors were already playing around with the Implicit Association Test (IAT), which was made public in 1998. Perhaps they came up with the IAT, retroactively produced an interesting paper that could be passed off as both new and as evidence for the necessity of the IAT, and then launched the IAT to prove the effects of “implicit social cognition” empirically. Come on now.

The IAT is the test that, to this day, purports to measure our unconscious bias as it relates to race, sex, age, and other prejudices. This isn’t just one of many tests; it’s the foundation upon which all contemporary unconscious bias commentary is built. Anyone can take the test. It was such a compelling breakthrough that even the famous author Malcolm Gladwell wrote, “The IAT is more than just an abstract measure of attitudes. It’s a powerful predictor of how we act in certain kinds of spontaneous situations” (Gladwell, 2005). Sounds amazing, right? Not so fast.

When you take the test, it takes you through a series of questions that you could quite easily lie about if you felt so inclined. For example, it asks you to rate how warm you feel towards European Americans and African Americans, with an option to be neutral. You are then asked to use two keys on your keyboard (E and I), pressing the left key when a bad word pops up on your screen and the right key when you see a good word. Next, you’re asked to press the left key when a good word or an image of a Black person appears and the right key when a bad word or an image of a White person appears. Then they switch it, so the bad words are paired with Black faces and the good words with White faces.

The whole measurement of this part of the test is the millisecond difference in response time when bad or good words are paired with Black or White faces. If you respond more quickly when Black faces are paired with bad words, their theory is that you have a “slight, moderate, or strong” preference for White faces over Black faces. In other words, this test claims that depending on how quickly you react, you are either racist or not. Even if you believe you’re not racist and your actions have never implied that you are, this test can “find the truth.” You are then a candidate for unconscious bias training, where well-paid consultants will come to your office and train it out of you, or so the theory goes.
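For illustration, here is a deliberately simplified sketch of the arithmetic behind that measurement. The reaction times below are invented, and the published scoring algorithm adds trial filtering and error penalties; the core of the score, though, is just the difference in mean response times between the two pairings, scaled by their overall variability.

```python
# Simplified sketch of an IAT-style score. The reaction times (ms)
# are invented; real scoring also filters trials and penalizes errors.
from statistics import mean, stdev

# Block where White+good / Black+bad share keys:
congruent_rts = [612, 580, 655, 601, 590, 640]
# Block where Black+good / White+bad share keys:
incongruent_rts = [690, 710, 675, 720, 688, 702]

pooled_sd = stdev(congruent_rts + incongruent_rts)
d_score = (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Larger positive values are read as a stronger implicit preference
# for White over Black faces; the published cutoffs are roughly
# 0.15 ("slight"), 0.35 ("moderate"), and 0.65 ("strong").
print(round(d_score, 2))
```

Everything the test concludes rests on that single number, which is why fatigue, practice effects, or distraction in one block can move the verdict.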

In an interview with National Public Radio (NPR) in 2016, Mahzarin Banaji said, “In the late 1990s, I did a very simple experiment with Tony Greenwald in which I was to quickly associate dark-skinned faces – faces of black Americans – with negative words. I had to use a computer key whenever I saw a black face or a negative word, like devil or bomb, war, things like that.

And likewise, there was another key on the keyboard that I had to strike whenever I saw a white face or a good word, a word like love, peace, joy. I was able to do this very easily. But when the test then switched the pairing, and I had to use the same computer key to identify a black face with good things and white faces and bad things, my fingers appeared to be frozen on the keyboard. I literally could not find the right – the right key. That experience is a humbling one. It is even a humiliating one because you come face to face with the fact that you are not the person you thought you were” (NPR, 2016).

I’ve done the test on more than one occasion, and it isn’t difficult to find the keys. There are only two of them (E and I). Perhaps Professor Banaji needs some unconscious bias training?

It’s interesting to note there’s now a rather confusing disclaimer at the end of the test, shown just after you complete it: “These IAT results are provided for educational purposes only. The results may fluctuate and should not be used to make important decisions. The results are influenced by variables related to the test (e.g., the words or images used to represent categories) and the person (e.g., being tired, what you were thinking about before the IAT)” (Project Implicit, 2022).

So even they say the test should not be used to make important decisions and that the results may fluctuate. Despite this, the test is still up there, and Mahzarin Banaji is certainly still promoting it. In Blindspot, the 2013 book she co-authored with Greenwald, Banaji writes: “Studies that summarize data across many people find that the IAT predicts discrimination in hiring, education, healthcare, and law enforcement. However, taking an IAT once (like you just did) is not likely to predict your future behavior well” (Banaji & Greenwald, 2013).

So, in other words, the test is neither reliable nor valid when you take it once, but if many people take it, the results are reliable. Sorry, that’s not how it works. It reminds me of the story of the Emperor’s New Clothes by Hans Christian Andersen, except in an even more ridiculous version: “Emperor, I know you think you’re naked in this gold suit we stitched by hand for you [that doesn’t actually exist], but if you buy many of them, you will no longer believe you’re naked, and their splendor will be revealed.”

What’s happened since the IAT would be the equivalent of a scientist publishing a made-up scientific paper on anger and then coming out with an Implicit Anger Test engineered to ‘prove’ everyone is angry based on the time it takes people to respond to phrases like “I would hit him” or “I would smile at him.” Then everyone in the company is forced to attend anger management classes because it was discovered that most of us are angry. If that sounds absurd, it’s exactly what’s happened with unconscious bias and the IAT.

If only the IAT held the key to why discrimination still exists today. If only all we had to do was attend training and have our unconscious biases educated away. The theory of the IAT and its underlying suggestions looked very robust until it came time for scrutiny against the psychometric standards of reliability and validity, something which should have happened before the test was even released. That didn’t happen.

In 2015, Banaji and Greenwald wrote a paper whose title could easily have read, “No, we know it’s nonsense, but even a tiny correlation could mean something, right? Trust us; we’re psychologists.” The actual title was “Statistically small effects of the Implicit Association Test can have societally large effects” (Greenwald et al., 2015). The paper responded to the many meta-analyses (studies of studies) that had by then been carried out on the IAT. Its conclusion reads: “First, both studies agreed that when considering only findings for which there is theoretical reason to expect positive correlations, the predictive validity of Black-White race IAT is approximately r = .20. Second, even using the two meta-analyses’ published aggregate estimated effect sizes, the two agreed in expecting that more than 4% of variance in discrimination-relevant criterion measures is predicted by Black–White race IAT measures. This level of correlational predictive validity of IAT measures represents potential for discriminatory impacts with very substantial societal significance.”
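Their own 4% figure follows directly from the correlation: the proportion of variance a correlation explains is the square of r. A one-line check in Python:

```python
# A correlation of r = 0.20 explains r**2 = 0.04 of the variance in
# the criterion measures, leaving 96% unexplained.
r = 0.20
print(f"{r**2:.0%} explained, {1 - r**2:.0%} unexplained")  # 4% explained, 96% unexplained
```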

This work by Banaji and Greenwald perfectly illustrates how Thorndike’s halo effect is alive and well. People believed that because these were ‘scientific papers’ written by ‘credible’ scientists, the IAT was the foundation upon which we would build more diverse and inclusive organizations. The truth is that it should never have been published in the first place, as it did not meet the criteria of reliability and validity outlined in the guidelines provided by the American Psychological Association (APA, 2022). Yet because it was published and made headlines as ‘scientific’ research, it spawned a billion-dollar training industry, and the rest is history.

Restoring Faith in Heuristics – Gerd Gigerenzer

At around the same time that Banaji and Greenwald were working on their theory, the psychologist Gerd Gigerenzer and his colleague Daniel Goldstein were further developing the ideas first introduced by Herbert Simon on bounded rationality and satisficing (Gigerenzer and Goldstein, 1996). Their view contrasted with the doom and gloom presented by the likes of Banaji and Greenwald and, to a lesser extent, Kahneman and Tversky.

They came up with the concept of “fast and frugal” heuristics. Their studies concluded that often “less is more” when it comes to making decisions. They showed how individuals with experience in a given field are better equipped to rank-order the cues offered to help them decide, and to choose accordingly. Their research helped illustrate the work done some fifty years earlier by Herbert Simon, showing how and when heuristics can be more effective and how experience is an essential factor in minimizing the errors made when deciding in conditions of uncertainty.

Conclusion

Unconscious bias is the same thing as implicit bias, which stemmed from the work done on cognitive biases, which in turn is grounded in the work done on heuristics. Heuristics are the shortcuts our brains use to make decisions when not all information is known and we are limited in time and cognitive ability.

Heuristics are a necessary cognitive tool, so much so that we also teach computers to use them in AI to lighten the computational load of complex problems. When we decide under conditions of uncertainty and therefore resort to heuristics, those heuristics can be at the mercy of our unconscious or implicit biases. We have long been taught that our biases are hardwired into our brains. In fact, it is the process of forming biases that is hardwired into our brains, not the biases themselves.

Informed heuristics are significantly more useful than uninformed ones, and this isn’t captured in the corporate world’s interpretation of unconscious bias. The way to make diversity hiring more effective is not to remove all of our biases; it is to reduce uncertainty and create more “knowns.” In other words, with a mandate to hire more ‘diverse candidates’ and a reliable periodic audit of that process, we are no longer making judgments under the same degree of uncertainty. As a result, we are not at the mercy of heuristics and, therefore, not held hostage by our biases.

If the facts are known, and the facts in this case are that we need to hire more diverse candidates, then heuristics should not play a part. If heuristics don’t play a role, then there are no unconscious errors in judgment (unconscious bias); there are only conscious errors in judgment. That’s still the same problem, with the same results, but one we can aim to tackle differently than by covering up for those who consciously discriminate while condemning the rest of the company with inappropriate labels of bigotry.

For companies to become more diverse moving forward, we cannot just blame our unconscious biases, as that gives us no hope of a solution. While we choose to persecute every employee for their supposedly corrupt biases, the real perpetrators in the company are protected by initiatives like blind resume screening. Blind resume screening only identifies a problem; it doesn’t solve it. It’s a band-aid that protects those in the company who should not be in charge of making decisions. Those who are in charge should realize that the solution lies hidden in plain sight in the birthday cake. The birdie cage has to go.

References

Banaji, M. R., & Greenwald, A. G. (2013). Blindspot: Hidden biases of good people. New York: Delacorte Press.

Gladwell, M. (2005). Blink: The power of thinking without thinking. New York: Little, Brown and Company.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.

Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102(1), 4–27.

Greenwald, A. G., Banaji, M. R., & Nosek, B. A. (2015). Statistically small effects of the Implicit Association Test can have societally large effects. Journal of Personality and Social Psychology, 108(4), 553–561.

Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4), 650–669.

NPR (2016). How the Concept of Implicit Bias Came Into Being. https://www.npr.org/2016/10/17/498219482/how-the-concept-of-implicit-bias-came-into-being.

Polya, G. (1945). How to solve it; a new aspect of mathematical method. Princeton University Press.


Project Implicit. (2022). https://implicit.harvard.edu/implicit/takeatest.html/.

Shannon, C. E. (1950). Programming a computer for playing chess. Philosophical Magazine, 41(314), 256–275.

Simon, H. A. (1947). Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations. New York: Free Press. (4th ed., 1997.)

Simon, H. A. (1979). Rational decision making in business organizations. American Economic Review, 69(4), 493–513.

Simon, H. A., & Newell, A. (1958). Heuristic problem solving: The next advance in operations research. Operations Research, 6(1), 1–10.

Thorndike, E. (1920). A constant error in psychological ratings. Journal of Applied Psychology, 4(1), 25–29.

About The Author

Fraser Hill is the founder of the leadership consulting and assessment company Bremnus, as well as the founder and creator of Extraview.io, an HR software company focused on experienced-hire interviewing and selection in corporations and executive search firms. His 20+ year career has taken him to London, Hong Kong, Eastern Europe, Canada, and now the US, where he lives and works. His new book is The CEO’s Greatest Asset – The Art and Science of Landing Leaders.

