Biases distort our cognitive system and prevent us from making rational decisions. The myth that we are reasonable creatures stems from the incorrect assumption that our brains are, by default, instruments of logic. Steven Pinker argues otherwise in Rationality. By analyzing the most common cognitive fallacies, he takes us on a journey through the mechanics of the mind to the root of our irrationality, which he argues lies in motives shaping our judgments. Motivated reasoning describes how we let our beliefs mold our perception of the world, ensuring that we see things as we are, not as they are. Rather than altering our beliefs to fit new information, we alter our interpretation of that information to fit our beliefs. Pinker ties up this notion towards the end of the book by stating that each of us has a motive to prefer our truth, but together we’re better off with the truth. Although social media is accelerating the spread of fake news, societal outrage, and conspiracy theories, these only gain traction because believing such “florid fantasies” lies deep in human nature. Instead of using social media as the scapegoat for all that is wrong in the world, we should examine the principles guiding our beliefs.
“To begin at the beginning: what is rationality? As with most words in common usage, no definition can stipulate its meaning exactly, and the dictionary just leads us in a circle: most define rational as “having reason,” but reason itself comes from the Latin ration-, often defined as “reason.” A definition that is more or less faithful to the way the word is used is “the ability to use knowledge to attain goals.” Knowledge in turn is standardly defined as “justified true belief.” We would not credit someone with being rational if they acted on beliefs that were known to be false, such as looking for their keys in a place they knew the keys could not be, or if those beliefs could not be justified—if they came, say, from a drug-induced vision or a hallucinated voice rather than observation of the world or inference from some other true belief. The beliefs, moreover, must be held in service of a goal. No one gets rationality credit for merely thinking true thoughts, like calculating the digits of π or cranking out the logical implications of a proposition (“Either 1 + 1 = 2 or the moon is made of cheese,” “If 1 + 1 = 3, then pigs can fly”). A rational agent must have a goal, whether it is to ascertain the truth of a noteworthy idea, called theoretical reason, or to bring about a noteworthy outcome in the world, called practical reason (“what is true” and “what to do”). Even the humdrum rationality of seeing rather than hallucinating is in the service of the ever-present goal built into our visual systems of knowing our surroundings.”
Understanding what other people value is the first step to having meaningful conversations. A statement of facts begins with the evaluation of evidence. If a person does not value evidence, there is no evidence you can provide to argue that they should. The same is true for those who do not value logic: there is no logical argument you can make to convince them that they should. “But who doesn’t value facts, logic, and evidence?” you might think. The answer, as you will soon discover, is that most of us operate from this irrational stance more often than we realize. To argue against the value of evidence is to argue against reason itself, and Pinker writes that you lose this battle the moment you show up:
“Let’s say you argue that rationality is unnecessary. Is that statement rational? If you concede it isn’t, then there’s no reason for me to believe it—you just said so yourself. But if you insist I must believe it because the statement is rationally compelling, you’ve conceded that rationality is the measure by which we should accept beliefs, in which case that particular one must be false. In a similar way, if you were to claim that everything is subjective, I could ask, “Is that statement subjective?” If it is, then you are free to believe it, but I don’t have to. Or suppose you claim that everything is relative. Is that statement relative? If it is, then it may be true for you right here and now but not for anyone else or after you’ve stopped talking. This is also why the recent cliché that we’re living in a “post-truth era” cannot be true. If it were true, then it would not be true, because it would be asserting something true about the era in which we are living.”
The conjunction fallacy occurs when a person assumes that specific conditions are more likely than a single general one. Daniel Kahneman and Amos Tversky first demonstrated this cognitive bias with an experiment that became known as the Linda problem. Pinker outlines it in Rationality:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Please indicate the probability of each of these statements:
Linda is a teacher in elementary school.
Linda is active in the feminist movement.
Linda is a psychiatric social worker.
Linda is a bank teller.
Linda is an insurance salesperson.
Linda is a bank teller and is active in the feminist movement.
Respondents judged that it was likelier that Linda was a feminist bank teller than that she was a bank teller: the probability of A and B was judged to be higher than the probability of A alone.
Pinker extends the Linda problem to the forecasting of world events with a survey of several hundred respondents. Below, he describes the experiment:
Consider each of the following events, and write down your estimate of the likelihood that it will take place in the coming decade. Many of them are pretty unlikely, so let’s make finer distinctions at the lower end of the scale and pick one of the following probabilities for each: less than .01 percent, .1 percent, .5 percent, 1 percent, 2 percent, 5 percent, 10 percent, 25 percent, and 50 percent or more.
Saudi Arabia develops a nuclear weapon.
Nicolás Maduro resigns as president of Venezuela.
Russia has a female president.
The world suffers a new and even more lethal pandemic than Covid-19.
Vladimir Putin is constitutionally prevented from running for another term as president of Russia and his wife takes his place on the ballot, allowing him to run the country from the sidelines.
Massive strikes and riots force Nicolás Maduro to resign as president of Venezuela.
A respiratory virus jumps from bats to humans in China and starts a new and even more lethal pandemic than Covid-19.
After Iran develops a nuclear weapon and tests it in an underground explosion, Saudi Arabia develops its own nuclear weapon in response.
On average, people thought it was likelier that Putin’s wife would be president of Russia than that a woman would be president. They thought it was likelier that strikes would force Maduro to resign than that he would resign. They thought Saudi Arabia was more likely to develop a nuclear weapon in response to an Iranian bomb than it was to develop a nuclear weapon. And they thought it was likelier that Chinese bats would start a pandemic than that there would be a pandemic. You probably agree with at least one of these comparisons; 86 percent of the participants who rated all the items did. If so, you violated an elementary law of probability, the conjunction rule: the probability of a conjunction of events (A and B) must be less than or equal to the probability of either of the events (A, or B).
In each pair of world events, the second scenario is a conjunction of events, one of which is the event in the first scenario. For example, “Iran tests a nuclear weapon and Saudi Arabia develops a nuclear weapon” is a conjunction that embraces “Saudi Arabia develops a nuclear weapon” and must have a smaller chance of happening, since there are other scenarios in which the Saudis might go nuclear (to counter Israel, to flaunt hegemony over the Persian Gulf, and so on). By the same logic, Maduro resigning the presidency has to be more likely than Maduro resigning the presidency after a series of strikes.
What are people thinking? A class of events described by a single statement can be generic and abstract, with nothing for the mind to hold on to. A class of events described by a conjunction of statements can be more vivid, especially when they spell out a story line we can watch in the theater of our imagination. Intuitive probability is driven by imaginability: the easier something is to visualize, the likelier it seems.
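The conjunction rule falls directly out of the product rule of probability: P(A and B) = P(A) × P(B given A), and since no probability exceeds 1, the product can never exceed P(A). A minimal sketch for the Saudi scenario, using made-up probabilities (they are illustrative, not Pinker’s):

```python
# Illustrative (made-up) probabilities for the Saudi nuclear scenario.
p_saudi_nuke = 0.05             # P(A): Saudi Arabia develops a nuclear weapon
p_iran_first_given_saudi = 0.4  # P(B|A): Iran tested first, given the Saudis went nuclear

# Product rule: P(A and B) = P(A) * P(B|A).
# Multiplying by a number <= 1 can only shrink the probability.
p_conjunction = p_saudi_nuke * p_iran_first_given_saudi

assert p_conjunction <= p_saudi_nuke  # the conjunction rule, guaranteed by arithmetic
print(round(p_conjunction, 3))  # 0.02: the vivid story is less likely than its bland ingredient
```

However plausible the Iranian-test backstory makes the scenario feel, it can only remove probability mass, never add it.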
The availability heuristic is a method of making decisions based on whatever comes most readily to mind: anything that has left a lasting impression on us. Here’s how Pinker explains the bias:
“People judge the probability of events by the ease with which instances come into mind, a habit that Tversky and Kahneman called the Availability Heuristic. We use the ranking from our brain’s search engine—the images, anecdotes, and mental videos it coughs up—as our best guess of the probabilities. The heuristic exploits a feature of human memory, namely that recall is affected by frequency: the more often we encounter something, the stronger the trace it leaves in our brains.”
There is too much information circulating in society to evaluate every claim on its merits. To cope, we create mental shortcuts. When presented with new information, we take a quick inventory of our current beliefs to form an initial assessment of the material’s veracity. In cases where we hold no pre-existing views on the information, the framing of the ideas becomes crucial. If information is framed positively, say it comes from someone we like, we search for reasons why it is correct. If framed negatively, say it comes from someone we dislike, we do the opposite and find reasons why it is wrong.
“Outside our immediate experience, we learn about the world through the media. Media coverage thus drives people’s sense of frequency and risk: they think they are likelier to be killed by a tornado than by asthma, despite asthma being eighty times deadlier, presumably because tornadoes are more photogenic.
“The availability bias may affect the fate of the planet. Several eminent climate scientists, having crunched the numbers, warn that “there is no credible path to climate stabilization that does not include a substantial role for nuclear power.” Nuclear power is the safest form of energy humanity has ever used. Mining accidents, hydroelectric dam failures, natural gas explosions, and oil train crashes all kill people, sometimes in large numbers, and smoke from burning coal kills them in enormous numbers, more than half a million per year. Yet nuclear power has stalled for decades in the United States and is being pushed back in Europe, often replaced by dirty and dangerous coal. In large part the opposition is driven by memories of three accidents: Three Mile Island in 1979, which killed no one; Fukushima in 2011, which killed one worker years later (the other deaths were caused by the tsunami and from a panicked evacuation); and the Soviet-bungled Chernobyl in 1986, which killed 31 in the accident and perhaps several thousand from cancer, around the same number killed by coal emissions every day.”
The media has amplified our irrational judgments. One classic example is the safety of flying versus driving. Unless a celebrity is behind the wheel, car accidents rarely make the news. In contrast, plane accidents receive heavy coverage, even if the pilot makes a safe landing.
The world’s roads claim around 1.3 million lives each year, while plane crashes kill about 250 people. According to Pinker, planes are about a thousand times safer per passenger-mile than cars, yet we all know people who fear flying and nobody with a fear of driving. In part, these irrational fears arise from the media’s availability bias, since news is what happens, not what doesn’t.
We hear about the single plane crash of the year, but not about the 364 days of safe air travel. To put it another way, if news outlets applied the same approach to cars, they would have to run the headline, “Roughly 3,500 dead in road accidents yesterday,” every single day. Pinker further explains how novelty drives media headlines:
“The denominator in the fraction corresponding to the true probability of an event—all the opportunities for the event to occur, including those in which it doesn’t—is invisible, leaving us in the dark about how prevalent something really is. The distortions, moreover, are not haphazard, but misdirect us toward the morbid. Things that happen suddenly are usually bad—a war, a shooting, a famine, a financial collapse—but good things may consist of nothing happening, like a boring country at peace or a forgettable region that is healthy and well fed. And when progress takes place, it isn’t built in a day; it creeps up a few percentage points a year, transforming the world by stealth. As the economist Max Roser points out, news sites could have run the headline 137,000 PEOPLE ESCAPED EXTREME POVERTY YESTERDAY every day for the past twenty-five years. But they never ran the headline, because there was never a Thursday in October in which it suddenly happened. So one of the greatest developments in human history—a billion and a quarter people escaping from squalor—has gone unnoticed.”
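Roser’s figure is easy to verify as back-of-the-envelope arithmetic: a billion and a quarter people over twenty-five years comes out to roughly 137,000 per day.

```python
people = 1.25e9        # a billion and a quarter escaping extreme poverty
days = 25 * 365        # twenty-five years, ignoring leap days

per_day = people / days
print(round(per_day))  # 136986, roughly the 137,000 in Roser's headline
```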
"Bayes’ rule or Bayes’ Theorem is the law of probability governing the strength of evidence—the rule saying how much to revise our probabilities (change our minds) when we learn a new fact or observe new evidence.
"A paradigm case of Bayesian reasoning is medical diagnosis. Suppose that the prevalence of breast cancer in the population of women is 1 percent. Suppose that the sensitivity of a breast cancer test (its true-positive rate) is 90 percent. Suppose that its false-positive rate is 9 percent. A woman tests positive. What is the chance that she has the disease? The most popular answer from a sample of doctors given these numbers ranged from 80 to 90 percent. Bayes’s rule allows you to calculate the correct answer: 9 percent. That’s right, the professionals whom we entrust with our lives flub the basic task of interpreting a medical test, and not by a little bit. They think there’s almost a 90 percent chance she has cancer, whereas in reality there’s a 90 percent chance she doesn’t. Imagine your emotional reaction upon hearing one figure or the other, and consider how you would weigh your options in response. That’s why you, a human being, want to learn about Bayes’s theorem."
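The 9 percent answer follows mechanically from Bayes’s rule once the three numbers are plugged in; nothing beyond arithmetic is required. A sketch using Pinker’s figures:

```python
prior = 0.01        # prevalence: P(cancer)
sensitivity = 0.90  # true-positive rate: P(positive | cancer)
false_pos = 0.09    # false-positive rate: P(positive | no cancer)

# Bayes's rule: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive),
# where P(positive) counts both true positives and false alarms.
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # 0.092: roughly 9 percent, not the doctors' 80-90
```

Intuitively: in a group of 10,000 women, about 100 have cancer and 90 of them test positive, while roughly 891 of the 9,900 healthy women also test positive. A positive result is therefore far more likely to be a false alarm than a true detection.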
Another example is our opinion of our own driving skills. Imagine we believe ourselves to be a safe driver and, one day, get into a car accident. Can we reconcile the accident with our belief that we’re a competent driver? Yes: it’s the other driver’s fault! By doing this, we retain our positive beliefs about our driving ability instead of updating them in response to the evidence.
With Bayes’ theorem, we can ask whether this evidence is more consistent with our current belief or with an alternative.
Do you think the car accident is more likely to have happened if you were a good driver or a bad one?
Being a responsible driver is still possible after a car accident, but our confidence in our driving ability might drop from 80% to 60% after the crash. In this way, we can imagine beliefs as a sliding scale of probabilities rather than true-or-false absolutes.
But why should the evidence alter our beliefs at all? Because the accident is less likely to have happened if we are a responsible driver than if we’re not. This ability to update our beliefs based on the latest evidence is Bayes’ theorem at work. 
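A sketch of that update in code. The 80 percent prior comes from the example above; the two accident likelihoods (15 percent for a good driver, 40 percent for a bad one) are assumptions chosen so the posterior lands on 60 percent:

```python
prior_good = 0.80          # confidence that we're a good driver, before the crash
p_crash_given_good = 0.15  # assumed: chance of a crash if we really are good
p_crash_given_bad = 0.40   # assumed: chance of a crash if we're not

# Bayes's rule: reweight the prior by how well each hypothesis predicts the crash.
p_crash = p_crash_given_good * prior_good + p_crash_given_bad * (1 - prior_good)
posterior_good = p_crash_given_good * prior_good / p_crash

print(round(posterior_good, 2))  # 0.6: confidence slides from 80% to 60%
```

Because a crash is likelier under the “bad driver” hypothesis, the evidence shifts some, but not all, of our confidence away from “good driver”: a slide, not a verdict.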
Probabilistic thinking means avoiding absolutes. When we express the likelihood of our beliefs, we open the door to further discussions. Rather than expressing things in black and white terms, as true or false, try saying: ‘I’m an 80 on that’, or ‘I’m only a 25 on that’. As a result, we remain receptive to new evidence while gathering information to guide future truth-seeking efforts.
This brings us back to motivated reasoning: letting our beliefs mold our perception of the world so that we see things as we are, not as they are. As Pinker puts it, each of us has a motive to prefer our truth, but together we’re better off with the truth.
When we add a belief to our arsenal, evidence for why it’s true begins compounding. Even if provided with proof that disproves these beliefs, we will interpret such evidence in ways that support our currently held narratives.
The human brain is not a level playing field for rational decision-making. Wikipedia lists 188 cognitive biases; in less technical terms, we could think of these as tricks our brains play on us, hindering our ability to apply rationality, logic, and sound judgment. Understanding these biases makes it easier to smooth out some bumps, but increasing our capacity for rationality requires serious commitment.
Annie Duke, in her book Thinking in Bets, explores motivated reasoning, describing how we form beliefs:
“We form beliefs in a haphazard way, believing all sorts of things based just on what we hear out in the world but haven’t researched for ourselves.
“This is how we think we form abstract beliefs: we hear something; we think about it and vet it, determining whether it is true or false; only after that do we form our belief.
“It turns out, though, that we actually form abstract beliefs this way: we hear something; we believe it to be true; only sometimes, later, if we have the time or the inclination, do we think about it and vet it, determining whether it is, in fact, true or false.
"We might think of ourselves as open-minded and capable of updating our beliefs based on new information, but the research conclusively shows otherwise. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.”
Rationality continues Pinker’s exploration of motivated reasoning and its influence on how we reach conclusions.
“The obvious reason that people avoid getting onto a train of reasoning is that they don’t like where it takes them. It may terminate in a conclusion that is not in their interest, such as an allocation of money, power, or prestige that is objectively fair but benefits someone else. As Upton Sinclair pointed out, ‘It is difficult to get a man to understand something, when his salary depends upon his not understanding it.’
“The mustering of rhetorical resources to drive an argument toward a favored conclusion is called motivated reasoning. The motive may be to end at a congenial conclusion, but it may also be to flaunt the arguer’s wisdom, knowledge, or virtue. We all know the barroom blowhard, the debating champ, the legal eagle, the mansplainer, the competitive distance urinator, the intellectual pugilist who would rather be right than get it right.
“We are also motivated to regulate our information diet. In biased assimilation (or selective exposure), people seek out arguments that ratify their beliefs and shield themselves from those that might disconfirm them. (Who among us doesn’t take pleasure in reading editorials that are politically congenial, and get irritated by those from the other side?) Our self-protection continues with the arguments that do reach us. In biased evaluation, we deploy our ingenuity to upvote the arguments that support our position and pick nits in the ones that refute it.”
George Orwell aptly summed up motivated reasoning in a sentence:
“We are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right.”
Confirmation bias causes us to focus on the information that reinforces our beliefs while ignoring or distorting disconfirming evidence. Part of this reasoning is our tendency to treat ideas like possessions, things we wish to retain at all costs.
People are often more concerned with winning an argument than discovering the truth. Truth-seeking has become a competition that rewards dogmatic beliefs and punishes open-mindedness, giving rise to the social notion that changing our minds is a sign of weakness. In fact, updating our beliefs when presented with new evidence is a source of substantial intellectual strength. F. Scott Fitzgerald summed this up when he wrote:
“The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.”
There is a vast body of scientific literature documenting confirmation bias; here is one example from Carol Tavris and Elliot Aronson’s book, Mistakes Were Made (But Not By Me):
“Indeed, even reading information that goes against your point of view can make you all the more convinced you are right. In one experiment, researchers selected people who either favored or opposed capital punishment and asked them to read two scholarly, well-documented articles on the emotionally charged issue of whether the death penalty deters violent crimes. One article concluded that it did; the other that it didn’t. If the readers were processing information rationally, they would at least realize that the issue is more complex than they had previously believed and would therefore move a bit closer to each other in their beliefs about capital punishment as a deterrence. But dissonance theory predicts that the readers would find a way to distort the two articles. They would find reasons to clasp the confirming article to their bosoms, hailing it as a highly competent piece of work. And they would be supercritical of the disconfirming article, finding minor flaws and magnifying them into major reasons why they need not be influenced by it. This is precisely what happened. Not only did each side discredit the other’s arguments; each side became even more committed to its own.”
Pinker approaches confirmation bias by building on Dan M. Kahan’s work on expressive rationality, which is “reasoning that is driven by the goal of being valued by one’s peer group rather than attaining the most accurate understanding of the world. People express opinions that advertise where their heart lies.” Pinker provides the following example to demonstrate how the meaning of the word “belief” differs depending on the belief itself.
“No matter how effectively a false belief flaunts the believer’s mental prowess or loyalty to the tribe, it’s still false, and should be punished by the cold, hard facts of the world. As the novelist Philip K. Dick wrote, reality is that which, when you stop believing in it, doesn’t go away. Why doesn’t reality push back and inhibit people from believing absurdities or from rewarding those who assert and share them? The answer is that it depends what you mean by “believe.” Mercier notes that holders of weird beliefs often don’t have the courage of their convictions. Though millions of people endorsed the rumor that Hillary Clinton ran a child sex trafficking ring out of the basement of the Comet Ping Pong pizzeria in Washington (the Pizzagate conspiracy theory, a predecessor of QAnon), virtually none took steps commensurate with such an atrocity, such as calling the police. The righteous response of one of them was to leave a one-star review on Google. (“The pizza was incredibly undercooked. Suspicious professionally dressed men by the bar area that looked like regulars kept staring at my son and other kids in the place.”) It’s hardly the response most of us would have if we literally thought that children were being raped in the basement. At least Edgar Welch, the man who burst into the pizzeria with his gun blazing in a heroic attempt to rescue the children, took his beliefs seriously. The millions of others must have believed the rumor in a very different sense of “believe.”
“Foremost among informal fallacies is the straw man, the effigy of an opponent that is easier to knock over than the real thing.
“Just as arguers can stealthily replace an opponent’s proposition by one that is easier to attack, they can replace their own proposition with one that is easier to defend.
“They can move the goalposts, demanding that we “defund the police” but then explaining that they only mean reallocating part of its budget to emergency responders. (Rationality cognoscenti call it the motte-and-bailey fallacy, after the medieval castle with a cramped but impregnable tower into which one can retreat when invaders attack the more desirable but less defensible courtyard.)”
Arguers use straw men to intentionally distort an opponent’s position so they can attack the effigy rather than the reality. If you’re one of the 37 million people who have watched Dr. Jordan Peterson’s interview with Channel 4’s Cathy Newman, you will have seen this play out. Rather than responding to Peterson’s points, Newman begins sentence after sentence with, “So what you’re saying is…”, setting up her straw man.
Using a straw man is not dissimilar to the ad hominem fallacy, in which an opponent attacks the character of the speaker rather than rebutting their argument. We’ve all seen this play out on social media and in presidential debates. Politicians use the trick by purposefully leaking information that casts their opponents in a bad light. Undermining a person’s character does not diminish the validity of their argument, yet the tactic continues to work. Online, I saw a funny analogy comparing the ad hominem fallacy to running out of bullets and throwing the gun at your opponent.
Christopher Hitchens famously said:
“I always think it’s a sign of victory when they move onto the ad hominem.”
The motte-and-bailey fallacy is a straw man in reverse. The bailey refers to someone’s principal point: often a controversial, sweeping claim that resembles the easier-to-defend motte.
The motte is a diluted version of the main argument, sometimes an undeniable truth.
The arguer retreats to the motte when their bailey is under attack, then claims that since the motte has not fallen, the bailey remains valid. By conflating the two positions, they run a classic bait and switch, moving the goalposts until they prevail in the debate.
The bandwagon fallacy treats popularity as proof: something is assumed true simply because many people believe it.
Product advertisers use this ploy every time they use phrases like “most popular,” despite the fact that the popularity of a product is irrelevant to its merits. Pinker explains further:
“The bandwagon fallacy exploits the fact that we are social, hierarchical primates. “Most people I know think astrology is scientific, so there must be something to it.” While it may not be true that “the majority is always wrong,” it certainly is not always right. The history books are filled with manias, bubbles, witch hunts, and other extraordinary popular delusions and madnesses of crowds.”
The genetic fallacy is accepting or rejecting a claim based solely on its origin. An idea’s source neither validates nor invalidates its claim to truth, yet this is one of the most prevalent fallacies you will encounter in the 21st century.
In literary criticism, the affective fallacy refers to evaluating works by the effect they have on the reader, who is then free to reject anything deemed harmful, hurtful, or uncomfortable. The opposite is equally true: a work can be embraced merely because it feels good. Although often described as a literary-criticism term, the fallacy applies to any case where someone judges the validity of a message solely by its emotional impact.
Myopic discounting is a reversal of preference based on the distance to the reward. Its name derives from myopia, nearsightedness, used figuratively for a lack of foresight. Steven Pinker explains:
“There’s a second way in which we irrationally cheat our future selves, called myopic discounting. Often we’re perfectly capable of delaying gratification from a future self to an even more future self. When a conference organizer sends out a menu for the keynote dinner in advance, it’s easy to tick the boxes for the steamed vegetables and fruit rather than the lasagna and cheesecake. The small pleasure of a rich dinner in 100 days versus the large pleasure of a slim body in 101 days? No contest! But if the waiter were to tempt us with the same choice then and there—the small pleasure of a rich dinner in fifteen minutes versus the large pleasure of a slim body tomorrow—we flip our preference and succumb to the lasagna. The preference reversal is called myopic, or nearsighted, because we see an attractive temptation that is near to us in time all too clearly, while the faraway choices are emotionally blurred and (a bit contrary to the ophthalmological metaphor) we judge them more objectively.”
Hyperbolic discounting is closely related to myopic discounting: it names our tendency to choose immediate rewards over future ones, even when the immediate rewards are smaller. Succumbing to this tendency results in poor decision-making because it promotes impulsivity and instant gratification, behaviors that prioritize short-term enjoyment over long-term wellbeing.
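Pinker’s dinner-menu reversal is easy to reproduce with the standard hyperbolic discount curve V = A / (1 + kD), where A is the reward, D the delay, and k the discount rate. The amounts, delays, and k below are illustrative assumptions, not figures from the book:

```python
def discounted_value(amount, delay_days, k=0.05):
    """Hyperbolic discounting: felt value falls off as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay_days)

small_soon = 50    # the rich dinner: smaller reward, available sooner
large_late = 100   # the slim body: larger reward, a month later

# At the moment of temptation: small reward now vs. large reward in 30 days.
assert discounted_value(small_soon, 0) > discounted_value(large_late, 30)     # 50.0 > ~40.0

# The same choice viewed 100 days in advance: delays become 100 and 130 days.
assert discounted_value(large_late, 130) > discounted_value(small_soon, 100)  # ~13.3 > ~8.3
```

Because the hyperbola falls steeply at first and flattens later, pushing both options into the future shrinks the penalty of the extra day’s wait, so the larger reward wins; up close, the immediacy of the small reward dominates.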
When someone makes a claim to promote an emotional response rather than provide relevant facts to support their argument, they commit the appeal to emotion fallacy.
While arguments often contain emotional components, the appeal to emotion fallacy occurs when a person uses emotion to manipulate and hide the fact that they cannot offer a rational counterargument.
Pinker uses the example of appealing to sympathy in Rationality:
“Then there are arguments directly aimed at the limbic system rather than the cerebral cortex. These include the Appeal to Emotion: ‘How can anyone look at this photo of the grieving parents of a dead child and say that war deaths have declined?’”
When we pass judgment on someone simply because they are associated with someone we already hold an opinion about, we commit the guilt-by-association, or association, fallacy.
If we dislike a particular podcast host because of their terrible takes, every guest of theirs may leave a sour impression on us. When we next see a guest’s name appear online, we will interpret everything they say in a negative light, thanks to the traces the negative association has left in our minds (the availability bias at work).
The driving example comes from Julia Galef’s excellent video on Bayes’ rule.
Note: All passages formatted in italics and quotation marks are from the original book unless otherwise stated. Passages not in italics or quotations are my own words.