Learn more about What's Our Problem? on Amazon.
Buy What's Our Problem?: Kindle | Audiobook
If you enjoy this summary, please consider buying me a coffee to caffeinate my reading sessions.
Note: The following are excerpts from What's Our Problem?: A Self-Help Book for Societies by Tim Urban.
I’m pretty into most Disney movies, but especially The Little Mermaid, Beauty and the Beast, Aladdin, and The Lion King. I’ve never been sure if those are objectively the best four Disney movies or if everyone just loves whichever Disney movies came out when they were between the ages of 7 and 12. Either way, those are clearly the four best Disney movies.
The thing about those movies, though, is that they’re definitely not real life, right? Like, kids might think Disney movies are the way the real world is, but everyone else knows that the real world is not like Disney movies. Right? I thought so too. Then I started writing this book. After spending the last few years thinking about political partisanship and Echo Chambers, it hit me: a ton of Americans think they live inside a Disney movie. Let me explain.
The real world is gray, amorphous, and endlessly nuanced. But Disney movies simplify the world into a binary digital code of 1s and 0s. There’s pure good (1), pure bad (0), and rarely anything in between.
Real people are complex and flawed, full of faults but almost always worthy of compassion. Disney characters, on the other hand, are either entirely good or entirely bad. In the real world, each turn of events is mired in potential positives and potential negatives, which is a mess to sort out. Disney movies get rid of that messiness. Aladdin gets ahold of the genie = 1. Jafar steals the genie away = 0. Disney even digitizes the weather, which is always either perfect or, when the bad guys are getting their way, stormy. Going full binary makes sense in Disney movies. Their core audience is little kids, who aren’t ready to sort through too much gray. Before a person learns to think in nuance, they first need to learn the basic concepts of good vs. bad, right vs. wrong, safe vs. dangerous, happy vs. sad.
But oversimplifying the real world is a bad idea—and unfortunately, that’s exactly what the Primitive Mind likes to do. So low-rung politics ends up feeling, to its participants, a lot like a Disney movie. Up on the high rungs, people know the world is a mess of complexity. They know that people are little microcosms of the messy world—each person an evolving gray smattering of virtues and flaws. Political Disney World is much more fun. Everything is crisp and perfectly digital. Good guys and bad guys, with good ideas and bad ideas. Good politicians and bad politicians with good policies and bad policies, winning their seats in good or bad election outcomes. Right and wrong. Smart and ignorant. Virtuous and evil. 1s and 0s.

When a bunch of adults are pretty sure that they live in a Disney movie, it’s usually a sign that primitive psychology has taken over and we’re dealing with golems. Both the individualism and collaboration of high-rung politics melt away on the low rungs, leaving only the rigid ant colony structure. People inside the tribe are pressured to conform, drawing unity and strength from an obsession with the common enemy and how stupid, ignorant, evil, bigoted, opportunistic, sneaky, toxic, selfish, and dangerous the bad guys are.

At the heart of every faction in Political Disney World is a guiding story: their political narrative. These narratives are all-encompassing versions of reality. They come with their own worldview, their own telling of history, and their own depiction of the present. A unique, customized Disney movie for the tribe, by the tribe.
The real test of any argument is how well it stands up to rigorous criticism. When you’re confident in your viewpoint, you love a chance to throw it into the ring with other arguments and show off its strength. It works like boxing: the stronger the opponents you’ve beaten, the better your ranking. That’s why a strong college paper always includes a strong “grizzly bear” counterargument—it lets the thesis “show off” in front of the professor. But what if you’re not so confident in your viewpoint? And you still want to make it seem like it can do well in the ring? As a procrastinator who wrote a lot of hasty, shitty papers in college, I can tell you firsthand that one of the trademarks of a paper with a weak thesis is an even weaker counterargument. When exposed to high-rung opponents, oversimplified narratives are usually knocked out in the first round. So political Echo Chambers make it taboo to criticize the narrative—it’s their way of banning anyone from landing a good punch.
But to generate intense conviction in their members—the belief that of COURSE the narrative is correct—political Echo Chambers need to make it seem like the narrative is a champion heavyweight boxer. So they rely on what are essentially fake fights that seem real to the Echo Chamber’s members—fights where the narrative always comes out on top. They pull this off using one of the most tried-and-true tools of the low-rung intellectual world: the straw man fallacy. To straw man your opponent, you invent a weak counterargument to your position and pretend that it’s your opponent’s position, even though it’s not. It’s the real-world version of what shitty college students do in their papers: conjure up a weakling opponent, pound it to the floor, and then declare victory.
The straw man makes regular appearances in political debates, and political speeches tend to be full of straw men too. Using a straw man can make you appear victorious to unwitting viewers, like a boxer who takes a swing at the balls mid-match and hopes the ref won’t see it. In Political Disney World, when a cleverly worded tweet or op-ed straw mans the opposing side, it goes viral, and soon the farce of a boxing match is played on loop throughout the Echo Chamber, ad nauseam.

The straw man fallacy also has its inverse—the “motte-and-bailey” fallacy—which can be used as a defensive tactic. The name comes from a type of two-part medieval fortification common in Northern Europe between the 10th and 13th centuries. The bailey was an area of land that was desirable and economically productive to live on but vulnerable to attack and hard to defend. When the bailey was threatened, inhabitants would run up the motte, a raised mound, and into the tower at its top. The motte, unlike the bailey, was easy to defend and nearly impossible to conquer—so invaders who captured the bailey would be unable to take the whole fortification. Eventually, with arrows raining down on them from the motte’s tower, the attackers would give up and leave, at which point the inhabitants could resume life in the pleasant, profitable bailey.
Philosopher Nicholas Shackel popularized the motte-and-bailey as a metaphor for a cheap argument tactic, whereby someone holding a convenient but not-very-defensible “bailey” viewpoint could, when facing dissent to that viewpoint, quickly run up the motte and swap out the viewpoint for a far stronger “motte” position. Kind of like what happened in 2003, during the arguments about whether to invade Iraq.

Political Disney World is a land of sprawling baileys, dotted with motte hills. And if you listen carefully, you’ll notice people darting up to their trusty mottes whenever their views come under fire.

The motte-and-bailey is often used alongside the straw man fallacy. Political Echo Chambers use the straw man to make their opponent’s position seem weaker than it is, and they use the motte-and-bailey to make their own position appear more ironclad than it is. In 2004, George W. Bush countered opponents of the Iraq War with this: “There’s a lot of people in the world who don’t believe that people whose skin color may not be the same as ours can be free and self-govern…I reject that. I reject that strongly. I believe that people who practice the Muslim faith can self-govern. I believe that people whose skins … are a different color than white can self-govern.” So: We just want to keep Americans safe from weapons of mass destruction (motte-and-bailey), but they don’t think brown people can self-govern (straw man).
Top-rung thinking forms hypotheses from the bottom up. Rather than adopt the beliefs and assumptions of conventional wisdom, you puzzle together your own ideas, from scratch. This is a three-part process:
1) Gather information
In order to puzzle, you need pieces. Each of us is constantly flooded with information, and we have severely limited attention to allot. In other words, your mind is an exclusive VIP-only club with a tough bouncer. But when Scientists want to learn something new, they try to soak up a wide variety of information on the topic. The Scientist seeks out ideas across the Idea Spectrum, even those that seem likely to be wrong—because knowing the range of viewpoints that exist about the topic is a key facet of understanding the topic.
2) Evaluate information
If gathering info is about quantity, evaluating info is all about quality. There are instances when a thinker has the time and the means to collect information and evidence directly—with their own primary observations, or by conducting their own studies. But most of the info we use to inform ourselves is indirect knowledge: knowledge accumulated by others that we import into our minds and adopt as our own. Every statistic you come across, everything you read in a textbook, everything you learn from parents or teachers, everything you see or read in the news or on social media, every tenet of conventional wisdom—it’s all indirect knowledge.

That’s why perhaps the most important skill of a good thinker is knowing when to trust. Trust, when assigned wisely, is an efficient knowledge-acquisition trick. If you can trust a person who actually speaks the truth, you can take the knowledge that person worked hard for—either through primary research or indirectly, using their own diligent trust criteria—and “photocopy” it into your own brain. This magical intellectual corner-cutting tool has allowed humanity to accumulate so much collective knowledge over the past 10,000 years that a species of primates can now understand the origins of the universe. But trust assigned wrongly has the opposite effect. When people trust information that isn’t true, they end up with the illusion of knowledge—which is worse than having no knowledge at all.

So skilled thinkers work hard to master the art of skepticism. A thinker who believes everything they hear is too gullible, and their beliefs become packed with a jumble of falsehoods, misconceptions, and contradictions. Someone who trusts no one is overly cynical, even paranoid, and limited to gaining new information only by direct experience. Neither of these fosters much learning.
The Scientist’s default skepticism position would be somewhere in between, with a filter just tight enough to consistently identify and weed out bullshit, and just open enough to let in the truth. As they become familiar with certain information sources—friends, media brands, articles, books—the Scientist evaluates the sources based on how accurate they’ve proven to be in the past. For sources known to be obsessed with accuracy, the Scientist loosens up the trust filter. When the Scientist catches a source putting out inaccurate or biased ideas, they tighten up the filter and take future information from it with a grain of salt. When enough information puzzle pieces have been collected, the third stage of the process begins.
3) Puzzle together a hypothesis
The gathering and evaluating phases rely heavily on the learnings of others, but for the Scientist, the final puzzle is mostly a work of independent reasoning. When it’s time to form an opinion, their head becomes a wide-open creative laboratory. Scientists, so rigid about their high-up position on the vertical How You Think axis, start out totally agnostic about their horizontal position on the What You Think axis. Early on in the puzzling process, they treat the Idea Spectrum like a skating rink, happily gliding back and forth as they explore different possible viewpoints. As the gathering and evaluating processes continue, the Scientist grows more confident in their puzzling. Eventually, they begin to settle on a portion of the Idea Spectrum where they suspect the truth may lie. Their puzzle is finally taking shape—they have begun to form a hypothesis.
Imagine I present to you a boxer and insist that he’s a champion fighter, even though he has never fought a single match.
You’d think I was insane. But people do this with ideas all the time. They feel sure they’re right about an opinion they’ve never had to defend—an opinion that has never stepped into the ring.

Scientists know that an untested belief is only a hypothesis—a boxer with potential, but not a champion of anything. So the Scientist starts expressing the idea publicly, in person and online. It’s time to see if the little guy can box. In the world of ideas, boxing opponents come in the form of dissent. When the Scientist starts throwing ideas out into the world, the punches pour in. Biased reasoning, oversimplification, logical fallacies, and questionable statistics are the weak spots that feisty dissenters look for, and every effective blow landed on the hypothesis helps the Scientist improve their ideas. This is why Scientists actively seek out dissent.

As organizational psychologist Adam Grant puts it in his book Think Again: “I’ve noticed a paradox in great scientists and superforecasters: the reason they’re so comfortable being wrong is that they’re terrified of being wrong. What sets them apart is the time horizon. They’re determined to reach the correct answer in the long run, and they know that means they have to be open to stumbling, backtracking, and rerouting in the short run. They shun rose-colored glasses in favor of a sturdy mirror.”
The more boxing matches the Scientist puts their hypothesis through, the more they’re able to explore the edges of their conclusions and tweak their ideas into crisper and more confident beliefs. With some serious testing and a bunch of refinements under their belt, the Scientist may begin to feel that they have arrived at Point B: knowledge.

It’s a long road to knowledge for the Scientist because truth is hard. It’s why Scientists say “I don’t know” so often. It’s why, even after getting to Point B in the learning process, the Scientist applies a little asterisk, knowing that all beliefs are subject to being proven wrong by changing times or new evidence. Thinking like a Scientist isn’t about knowing a lot; it’s about being aware of what you do and don’t know, and about keeping your conviction in step with how much you actually know as you learn.

When you’re thinking like a Scientist—self-aware, free of bias, unattached to any particular ideas, motivated entirely by truth and continually willing to revise your beliefs—your brain is a hyper-efficient learning machine. But the thing is—it’s hard to think like a Scientist, and most of us are bad at it most of the time. When your Primitive Mind wakes up and enters the scene, it’s very easy to drift down to the second rung of our Ladder—a place where your thinking is caught up in the tug-of-war between the Primitive Mind and the Higher Mind.