Note: The following are excerpts from If Nietzsche Were a Narwhal: What Animal Intelligence Reveals about Human Stupidity by Justin Gregg.
I. “We are hardwired to be duped,” argues Timothy R. Levine in his book Duped: Truth-Default Theory and the Social Science of Lying and Deception. Levine is Distinguished Professor and chair of communication studies at the University of Alabama at Birmingham, and has spent his career studying human lying; his research has been funded by the FBI and the NSA. Levine’s work argues that, despite our obvious capacity and propensity to lie, the default setting for our species is to accept the things we hear as true, something Levine calls truth-default theory (TDT). “TDT proposes that the content of incoming communication is usually uncritically accepted as true, and most of the time this is a good thing for us,” he argues. “The tendency to believe others is an adaptive product of human evolution that enables efficient communication and social coordination.” As a species, humans are wired both for credulity and for telling lies. It’s that combination of traits—the bizarre mismatch between our facility for telling lies and our inability to spot them—that makes us a danger to ourselves.
II. Humans are unlike other animals when it comes to our capacity for deception. Because we are why specialists, we have minds overflowing with ideas—dead facts—about how the world works, which gives us an infinite number of subjects about which we could lie. We are also in possession of a communication medium—language—that allows us to transform these dead facts into words that slither into the minds of other people with ease. What’s more, we have the capacity to understand that other people have minds in the first place; minds that hold beliefs about how the world is (i.e., what’s true), and thus minds that can be fooled into believing false information. As Levine points out, we’re also particularly bad at spotting false information. This sets up a scenario where, as we will see in this section, being a lying bullshit artist in a world filled with gullible victims can be a path to success, as it was for Russell Oakes. The accepted wisdom is that humans tell, on average, between one and two verbal lies a day. That, however, is an estimated average across the entire population. Six out of ten people claim not to lie at all (which is probably a lie), with most lies being told by a small subset of pathological liars who tell—on average—ten lies a day. We tell fewer lies as we get older, which might have less to do with our maturing sense of morality, and more to do with the cognitive decline that makes it harder to pull off the mental gymnastics needed to keep track of the nonsense we’re spouting. We need to think harder and maintain concentration to produce lies, which is why you often see the TV trope of an onscreen detective asking rapid-fire questions of suspects until they inadvertently blurt out the truth because they can’t think fast enough. 
The same logic underlies the phrase in vino veritas (in wine, there is truth): the idea that drinking alcohol works a bit like a truth serum, making people more likely to reveal their true feelings (and stop lying) once their higher-order thinking has been compromised.
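Those population numbers are easy to sanity-check. Here is a back-of-envelope sketch in Python, where the six-in-ten figure and the ten-lies-a-day figure come from the research described above, but the split of the remaining population is purely an illustrative assumption. It shows how a small subset of prolific liars can produce an average of one to two lies a day even though most people report telling almost none:

```python
# Back-of-envelope check: how a skewed distribution of liars yields
# a population average of one to two lies per day.
# The 60% "never lie" share and the ~10 lies/day figure come from the
# text; the split of the remaining 40% is an illustrative assumption.

groups = [
    (0.60, 0),   # six in ten people claim to tell no lies at all
    (0.35, 2),   # assumed: most of the rest tell a couple per day
    (0.05, 10),  # a small subset of prolific liars, ~10 lies/day
]

average = sum(share * lies for share, lies in groups)
print(f"Population average: {average:.1f} lies per day")  # 1.2
```

The point of the exercise is that the familiar "one to two lies a day" statistic describes almost nobody in particular: it is an average dominated by a small, very dishonest tail.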
An even better way to get ahead is to take lying to the next level: bullshitting. Bullshitting is a legitimate scientific term. It was popularized by the philosopher Harry Frankfurt in his 2005 book, On Bullshit, and is used in the scientific literature today to describe communication intended to impress others without concern for evidence or truth. It is not the same thing as lying, which involves knowingly creating false information with the intention of manipulating others’ behavior. A bullshitter, on the other hand, does not know and does not care whether what they’re saying is accurate. They are more concerned with what Stephen Colbert called truthiness: the quality of seeming or being felt to be true, even if not necessarily true. Bullshitting seems like a negative behavior that would gum up the works of the human social world and sow chaos and confusion. But there is evidence to suggest that bullshitting might be a skill that has been selected for by evolution. A capacity to produce bullshit might be a signal to others that the bullshitter is in fact an intelligent individual. A recent study in the journal Evolutionary Psychology found that participants who were the most skilled at making up plausible (but fake) explanations of concepts they didn’t understand (a bit like the game Balderdash) also scored highest on tests of cognitive ability. So being a better bullshitter is in fact correlated with being smarter. The authors concluded that “the ability to produce satisfying bullshit may serve to assist individuals in navigating social systems, both as an energetically efficient strategy for impressing others and as an honest signal of one’s intelligence.” In other words, the bullshitter has an extra advantage over a non-bullshitter: They don’t waste time worrying about the truth; they can focus all of their energy on being believed instead of being accurate.
I. Prognostic myopia is the human capacity to think about and alter the future coupled with an inability to actually care all that much about what happens in the future. It’s caused by the human ability to make complex decisions—availing of our unique cognitive skills—that have long-term consequences. But because our minds evolved primarily to deal with immediate—not future—outcomes, we rarely experience or even understand the consequences of these long-term decisions. It is the most dangerous flaw in human thinking. So dangerous that it might lead to the extinction of our species.
II. Prognostic myopia makes it difficult for us to make good decisions about our future because we’re heavily influenced by our problems in the here and now. To see how this difficulty affects us on a day-to-day basis, I will provide examples from my life. I will compare the decisions I have made over the past forty-eight hours to the recommendations of a decision-making robot who always knows the optimal solution to all my problems. I am calling this robot Prognostitron. Let’s say that Prognostitron’s goal is to maximize my health and happiness, as well as the health and happiness of my future offspring. You’d think I would have that same goal, but as you will see from my actual decisions, that’s clearly not the case.
Justin wants to watch a Hallmark movie. As a freelancer, I work from my home office most of the time. I do not have a boss looking over my shoulder making sure that I stay on task. I only have my own to-do lists and deadlines and a vague sense of “you should be doing something.” In other words, self-discipline determines my productivity. Yesterday, however, I wasn’t really feeling it. My procrastination levels were at an all-time high. To help me out of my funk, my wife asked if I might want to watch a Hallmark Christmas movie with her after lunch. Our relationship involves a lot of shared film-watching where we laugh ourselves silly at cinematic train wrecks. It’s a surefire way to elevate my spirits, and she was right to suggest it. I now had a decision to make: spend the afternoon watching Netflix, or go back to work. Prognostitron would suggest the obvious answer: Go sit behind your computer and get some work done. The consequences of not doing so are potentially dire. Missing a deadline or disappointing a client who had hired me for a job could cause me to lose out on future gigs, which would cause serious emotional distress, not to mention financial hardship. It’s a no-brainer: Skip the Hallmark movie and just go work.
So what did I choose to do? Obviously, I watched A Christmas Prince. Which, by the way, isn’t a train wreck at all. Rose McIver is a delight, I tell you. But how did I justify this? I knew just as well as Prognostitron what was at stake, and what the right thing to do was. But I also wanted to do something to remove the negative thoughts running through my head in that moment. And the easiest way to do that was to distract myself. And of course, watching a movie would mean spending quality time with my life partner, which is inherently rewarding. My mind was having a hard time balancing the need for immediate gratification with the long-term negative consequences of my decision. I was strangely indifferent to my future suffering, thanks to prognostic myopia. Edward Wasserman and Thomas Zentall, two psychologists famous for their work with animal cognition, penned an essay in 2020 for NBC News trying to explain why humans like me are so bad at caring about the long-term consequences of our decisions:

Urgent survival needs (believed to be mediated by older brain systems that we share with many other animals) mean that we still engage in impulsive behaviors. And those behaviors, which once promoted our survival and reproductive success, are now suboptimal because we live in an environment in which long-term contingencies play an increasingly important part in our lives.
This encapsulates why my daily life is so filled with prognostic myopia. But it also explains one of its far more sinister consequences. Because humans live in a world loaded with long-term contingencies, our poor decisions are not just affecting our daily lives. Humans alive today are making decisions whose negative consequences won’t be felt by other humans until many years from now. Often, many generations into the future. Yet, we simply don’t have minds designed to feel these consequences. In fact, in terms of decision-making, the further into the future we go, the less we care. To imagine a world three hundred years from now in which you are already dead removes even more of the emotional import that might be present in episodic foresight. We are no longer putting our temporal selves at the center of these time-traveling projections, but instead are trying to envision our hypothetical progeny walking through a nigh unimaginable hypothetical landscape. It simply becomes an intellectual exercise far removed from the kinds of decisions our minds evolved to make. And this is how prognostic myopia might kill us.
Eric Barcia had carefully calculated the height of the railroad trestle at Lake Accotink Park in Springfield, Virginia. It was seventy feet from the trestle’s edge to the concrete spillway below. An amateur bungee enthusiast who had been described by his grandmother as “very smart in school,” Barcia taped together a bunch of bungee cords until he had created a single cord that was about seventy feet long. In the early morning of July 12, 1997, Barcia fastened the makeshift cord to his ankles, tied the other end to the trestle, and leapt off the bridge. His body was found by a jogger soon after. Since bungee cords stretch when pulled (a fact that Barcia had overlooked), he had overestimated the length of cord needed by some sixty feet.
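It is worth pausing on the arithmetic that killed Barcia. The sketch below is a simplification under an assumed stretch factor—real bungee cords vary widely, and a safe jump also depends on the jumper’s weight and the cord’s spring behavior—but it shows why an unstretched cord as long as the drop itself is guaranteed to fail:

```python
# Why a seventy-foot cord is far too long for a seventy-foot drop.
# Bungee cords are designed to stretch under load; the factor of 2x
# here is an illustrative assumption (real cords vary).

drop_height = 70       # feet, from the trestle edge to the spillway
cord_length = 70       # feet, the unstretched cord Barcia assembled
stretch_factor = 2.0   # assumed: cord roughly doubles under a jumper's weight

stretched = cord_length * stretch_factor
print(f"Cord stretches to ~{stretched:.0f} ft on a {drop_height} ft drop")

# A safe plan works backwards from the desired stopping point:
# unstretched length = stopping point / stretch factor.
safe_length = (drop_height * 0.8) / stretch_factor  # stop ~80% of the way down
print(f"A safer unstretched length would be ~{safe_length:.0f} ft")
```

Under these assumptions the cord would have had to be a fraction of the drop height—not equal to it—which is precisely the step the plan skipped.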
The temptation here is to snicker at Barcia’s stupidity. But this is not a story of stupidity. Barcia’s cord-length miscalculation was but a sad footnote to a much larger tale of human cognitive prowess. To stand on the edge of that trestle and devise such an elaborate plan is a testament to everything amazing about the human mind. His death was the result of a simple mathematical error. Even hyper-intelligent rocket scientists make similar mistakes. Remember when the $125 million Mars Climate Orbiter burned up in the atmosphere of the red planet back in 1999? The engineers at NASA’s Jet Propulsion Laboratory used the metric system to calculate the orbiter’s trajectory, but the engineers at Lockheed Martin Astronautics (who built the orbiter’s software) used inches, feet, and pounds. The result? As it entered orbit, the space probe was 170 kilometers too low. Like Barcia, it plunged unceremoniously to its death, a tragic end to an otherwise remarkable tale of human ingenuity.
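The orbiter’s failure mode can be reduced to a toy example. The impulse value below is made up and the function name is hypothetical; the only real number is the conversion factor between pound-force seconds and newton seconds (1 lbf·s ≈ 4.448 N·s), which is the factor the two teams’ software silently disagreed about:

```python
# The Mars Climate Orbiter bug in miniature: one program reports a
# thruster impulse in pound-force seconds, another consumes the bare
# number as if it were newton seconds. The impulse value is made up;
# the conversion factor is real (1 lbf*s = 4.448222 N*s).

LBF_S_TO_N_S = 4.448222

def impulse_from_vendor_lbf_s() -> float:
    """Hypothetical vendor software: reports impulse in lbf*s."""
    return 10.0

# Buggy consumer: treats the raw number as N*s, understating thrust ~4.4x.
buggy = impulse_from_vendor_lbf_s()

# Correct consumer: converts explicitly at the interface boundary.
correct = impulse_from_vendor_lbf_s() * LBF_S_TO_N_S

print(f"buggy reading: {buggy:.1f} N*s vs correct: {correct:.1f} N*s")
```

The lesson, for software as for bungee cords, is that the units a number carries are part of the number; drop them at an interface and every downstream calculation inherits the error.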
Let’s take a closer look at our amateur bungee jumper. What exactly was happening in his mind that ultimately led to his death? Barcia had clearly been planning his jump for days—maybe even weeks—in advance. Which means that he, unlike most other animal species, was able to envision a version of himself in a future scenario wherein he would experience a positive feeling (e.g., joy, fear, excitement) as a result of jumping off the bridge. In other words, exactly what you would expect from an adrenaline junkie. The plan itself involved an intimate knowledge of cause and effect—a form of causal inference that is the hallmark of our species. Most animals understand that things fall down, but Barcia had a deeper knowledge of tension loads, trajectories, classical mechanics, and so forth. He knew, for example, that tying a cord around his ankles would prevent him from crashing into the ground. And, of course, Barcia was perfectly aware that jumping off a seventy-foot-tall bridge—under any circumstances—is inherently dangerous and thus scary. But, as any thrill-seeker will tell you, overriding this fear is part of the fun. After all, he was bungee jumping, not trying to kill himself. Everything we’ve discussed throughout this book about the human mind’s uniqueness is apparent here.
Now imagine that Santino—the rock-throwing chimpanzee we met in Chapter 3—was standing next to Barcia on the trestle’s edge. What is the difference between Santino’s and Barcia’s thought processes in that moment? Since chimpanzees are our closest evolutionary relative, comparing how Santino and Barcia would approach this scenario will give us important clues about human exceptionalism and our minds compared to other animals. Santino, for the record, would never tie a rope around his ankles and fling himself off a railroad trestle in pursuit of an endorphin rush.
Let’s begin with the basics: Do nonhuman animals even engage in thrill-seeking behavior? Many species of animals engage in novelty-seeking behavior—a close cousin of thrill-seeking. Consider cats. YouTube is filled with examples of cats getting themselves into dangerous spots because of their love of exploring potentially dangerous scenarios (e.g., tall trees, tight spaces). But the clearest example of not just novelty-seeking but full-on thrill-seeking in animals is found in the wild macaques of India seen in the 2017 BBC production Spy in the Wild. These monkeys climb a fifteen-foot pillar perched above an outdoor fountain and fling themselves into the narrow pool, where even a slight miscalculation—missing the water—could cause them serious injury or death. Although far less dangerous than jumping off a seventy-foot bridge above a concrete spillway, there is no denying that these monkeys are engaging in a dangerous activity from which they derive pleasure despite (or because of) the risks involved.
So, what’s stopping Santino from bungee jumping? It’s possible—if not likely—that a chimpanzee would want to engage in dangerous thrill-seeking behavior similar to the pool-diving macaques. But bungee jumping and pool diving are not identical when it comes to the cognitive skills needed to experience the thrill. Santino would need to come up with a plan involving the assembly of materials to create a bungee cord that would take days to execute—involving mental time travel skills that he likely does not possess. He would also need a sophisticated grasp of cause and effect—an understanding of what happens to a falling object that is secured to another object via an elastic material. He would then need to assemble this sophisticated kind of tool and find a way to secure it to himself and the bridge—skills that are seemingly well above his pay grade. This is a kind of why specialism that chimpanzees lack. Even if Santino had bungee-jumping aspirations, he is just not intelligent enough to bungee jump.
But that’s a good thing. Barcia’s bungee plan was a case of complex human cognition gone wrong. His intelligence, not his stupidity, was directly responsible for his death. Santino, the less intelligent of the two on paper, behaved more intelligently precisely because he was less intelligent. In other words, intelligence sometimes results in very stupid behavior.
Consider this example of a human-versus-animal battle of wits that highlights the pitfalls—or perhaps impotence—of human intelligence. There are three species of bedbugs that feed on humans when we are sleeping: Cimex lectularius, Cimex hemipterus, and Leptocimex boueti. Bedbugs are attracted to our body heat, our body odor, and the carbon dioxide we exhale when we breathe. They’re weirdly flat insects, which helps them hide in places we’d never think to look. They can slide in between cracks as small as the thickness of a sheet of paper. And because they feed exclusively on our blood, they find hiding places near where we sleep. They like us best when we are motionless in bed—an easier target. Their entire biology is centered around reading human behavior to try to figure out when we’re at our most vulnerable. “They won’t come out to feed until you let your guard down,” explained Dr. Jody Green to me over Zoom. Jody is an urban entomology extension educator with the University of Nebraska-Lincoln, and an expert in the behavior of the insects that drive us crazy: bedbugs, head lice, termites, fleas, etc. “They learn your schedule. If you work nights and only sleep during the day, they adapt—they get on your sleep schedule. If you go on vacation, they can wait for you to get back.”
Bedbugs’ hiding strategies can get quite elaborate. As they age, bedbugs shed their exoskeletons, which they leave behind as a ghostly shell. When you spray your house with pesticides, young bedbugs will sometimes sprint toward the nearest exoskeleton left behind by a larger adult and hide inside as the pesticides pass over them. “For extra protection,” explained Jody. But bedbugs’ main strategy is to hide in the places that nobody looks or thinks to spray with poison. Think about a hotel room for a moment. It gets a thorough scrubbing every day, including changing the bedding. And yet, hotel rooms are notorious hot spots for bedbugs. That’s because hotel rooms, just like our homes, have plenty of locations that are overlooked when it comes to regular cleaning. Some items rarely get washed. Things like the curtains. Or bed skirts. Which are often riddled with bedbugs. Maybe the craftiest hiding spot in a hotel is the one that you are the least likely to disturb: the Bible in the nightstand. Nearly every hotel room in North America has one thanks to a campaign by the Gideons International: a Christian evangelical group that has been distributing free Bibles for more than a century. The Bible has hundreds of pages between which a flat bedbug can slip. It’s the perfect hideout for an entire civilization of bedbugs. If you’re doing a sweep of your hotel room to check for bedbugs, this is the first place you should look, suggests Jody. “I know it’s probably not good to go flipping through the Bible looking for bedbugs, but…”
Bedbugs can generate these elaborate hiding strategies using, as we have seen in previous chapters, relatively simple decision-making skills that do not avail of things like episodic foresight or causal inference. And yet, these simple minds regularly outwit our complex human minds in a hide-and-seek battle. But this is not the most important lesson from this story. Because bedbugs are so difficult to find and squish, humans have been forced to unleash our most sophisticated why specialist abilities to come up with solutions for killing them. The chemical dichlorodiphenyltrichloroethane—more commonly known as DDT—is a potent insecticide, originally used to kill mosquitoes, and deployed widely during the Second World War to stop the spread of insect-borne diseases like malaria and typhus. But it’s equally effective at killing bedbugs. After the war ended, DDT became commercially available in North America, and average citizens started spraying it around their homes with wild abandon. With good reason. In the early 1900s, every single home in the United States had experienced a bedbug infestation. Within a decade, though, and before we learned how bad it was for human health, the mass spraying of DDT in North America almost led to the eradication of the bedbug on the continent. Almost.
The bedbugs that survived this purge were the ones that had developed a resistance to DDT. While the humans were taking their victory lap, these resistant bedbugs began to multiply—slowly at first. But then by the 1990s, the bedbug population exploded. By the mid-2000s, every state in the US was infested. A 2018 report found that 97 percent of pest control companies in the US treated for bedbugs within the previous year. In other words, DDT-resistant bedbugs are everywhere these days. In fact, modern bedbugs are resistant to almost every pesticide out there. So in the end, our smartest solutions were still no match for the simple minds of bedbugs. But there’s even more to this story, which highlights the grand downfall of the human mind thanks to prognostic myopia.
It turns out that releasing huge amounts of DDT into the environment in our fight against bedbugs was a rather boneheaded solution. The chemical has made its way into the very fabric of our lives in ways we are only just now starting to appreciate. Even though the United States banned the use of DDT in 1972, every single person living in the US right now (including children born after the ban) has trace amounts of DDT in their bodies. DDT has a half-life in water of 150 years, which means that the DDT coating the floors and walls of the homes we sprayed for bedbugs in the 1940s would have ended up in perfectly stable condition in our mop-bucket water. When those buckets were emptied, the DDT hitched a ride with the wastewater into sewage treatment plants, or straight into our rivers and oceans, where it began building up inside the bodies of fish and other aquatic animals. Some of those DDT-soaked fish ended up on our dinner plates, causing the chemical to build up in our own tissues, where it stays until we die. Mothers can pass traces of DDT on to their children through breastmilk, making it all but impossible to avoid ingesting DDT even today. What’s worse, DDT has induced epigenetic changes in women exposed to the chemical that are being passed down to their children and grandchildren. And these changes are directly linked to an increase in obesity, which is correlated with an increase in breast cancer in women whose ancestral line was exposed to DDT. “What your great-grandmother was exposed to during pregnancy, like DDT, may promote a dramatic increase in your susceptibility to obesity, and you will pass this on to your grandchildren in the absence of any continued exposures,” said Michael Skinner, an epigenetics expert from Washington State University. Not only are humans losing the bedbug war, but our hyper-intelligent technological solutions for fighting them have resulted in our poisoning ourselves and our grandchildren.
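That 150-year half-life is worth working out explicitly. Assuming a home sprayed in the late 1940s—the spray date is my assumption for the worked example; the half-life figure is from the text—standard exponential decay shows how little of that DDT has broken down since:

```python
# What a 150-year half-life implies for DDT sprayed in the late 1940s.
# The half-life figure is from the text; the elapsed times below are
# illustrative. Fraction remaining = 0.5 ** (years / half_life).

HALF_LIFE_YEARS = 150

def fraction_remaining(years: float) -> float:
    """Fraction of the original DDT still intact after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for years in (25, 80, 150, 300):
    print(f"after {years:3d} years: {fraction_remaining(years):.0%} remains")
```

Roughly 80 years on from the postwar spraying boom, about two-thirds of that DDT is still chemically intact—which is why a decision made in the 1940s is still being felt by people born after the 1972 ban.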
This is the problem with thinking of human intelligence as something special, and assuming that specialness is a good thing. Human cognition and animal cognition are not all that different, but where human cognition is more complex, it does not always produce a better outcome. In both the Barcia versus Santino and the bedbugs versus DDT battles, complex, human-style thinking was the loser. This is what I call the Exceptionalism Paradox. It’s the idea that even though humans are indeed exceptional when it comes to our cognition, it does not mean we are better at the game of life than other animals. In fact, because of this paradox, humans might be a less successful species precisely because of our amazing, complex intelligence.