Article by LiveReal Agents Grace and Blake

It’s getting harder to know what’s real and what’s phony.

We all want to be someone who sees through the bull, glimpses beyond the hype and spin, and understands what's really going on.

Nobody wants to be fooled by misinformation – especially in our age of information overload.

But sorting the real from the phony seems more difficult than ever. Those who are absolutely certain they haven’t been fooled might have been fooled the most.

And we can't rely on anyone else to do this for us. Ultimately, each of us has to solve this for ourselves.

But how?

Part of the trickery of this problem is that it can seem easy to solve.

“Just avoid misinformation, and trust only good information!”

But that just begs the question.

Easy answers kick the can down the road.

For example, one easy answer is, “Just look at the evidence!”

But that leaves out two critical factors: which evidence we focus on or ignore, and how we interpret that evidence – both of which are equally critical. After all, some folks cherry-pick the evidence they pay attention to (and ignore anything contradictory), then interpret it in a single, narrow way. “Evidence” can also be planted. Sometimes the accusation “There’s no evidence!” just means no one has bothered to look for it. And so on.

Another easy answer is, “Just listen to science!”

But of course, scientists often disagree. And even when there is a consensus, they can still be wrong. (Case studies: leeches, phrenology, ether, bleedings, the four humours, the earth-centered universe, etc.) Science is a method, not a revelation.

There’s the idea that “We just need to remove all the misinformation!”

The implication here is that “Then we’ll be left with only truth!” But this idea might be misinformation itself. It still misses the central question: how do we know what is “good” information, and what isn’t?

The question is essentially one of epistemology.

How do we know?

But this rabbit hole goes even deeper.

After all, most people assume everyone else is misinformed.

“Yes! Misinformation is a major problem! It’s really too bad that all those other people are so misinformed. What can we do about all those other people who are drowning in misinformation?”

Here, the problem of self-knowledge – “Know Thyself!” – arrives front and center yet again.

When everyone agrees that misinformation is a major problem – yet assumes that everyone but themselves is misinformed – things can get messy.

In situations like these, those with the least self-awareness can seem to rise to the top. It’s the Dunning-Kruger effect, where the least informed are the loudest and most confident.

The assumption is often, “The information *I* get is good.”

“How do you know that?”

“Well, because all the information I get confirms it.”

Call it “epistemic circularity.”

Confirmation bias can rule the world. It can literally define a person’s entire understanding of the universe.

This might seem good for self-esteem, but it comes at a price.

When things become circular, everyone can just believe what they want to believe. The result isn’t endless harmony, but precisely the opposite: it’s a recipe for perfect denial – not to mention nihilism, solipsism (everyone trapped in their own personal Matrix), epistemological anarchy, a war of all against all.

“I’m very well-informed on all the important issues.”

“How do you know?”

“Well, I know this because all of my trusted advisors agree and confirm it.”

“So, how do you know which advisors to trust?”

“Because I’m very well-informed.”

A closed, circular system with no fresh checks from outside itself typically becomes corrupt, like a swamp with no fresh water.

It becomes a fox guarding the henhouse.

It becomes a murderer who works as one of the detectives tasked with finding the murderer. (Case study: Showtime’s Dexter.)

It’s “corruptionology” (a field of study that doesn’t exist, but should). And corruption exists both in systems (such as police departments and governments) and in individual human beings.

We all have to rely on “advisors” to give us good information about the world. We can’t witness all world events firsthand, and we don’t have the time or ability to become experts in every area of life. So we trust others.

The “advisors” we trust might be certain media shows, various media publications or personalities, our friends or family, and so on.

But what if someone’s chief advisor is Gríma Wormtongue?

In Lord of the Rings, Wormtongue is chief advisor to King Théoden of Rohan.

He’s constantly whispering in the king’s ear, and has been for a long time.

But their relationship is corrosive. Both characters seem to be deteriorating. Something seems highly dysfunctional in this relationship. They’ve become a closed system where decay has set in. The king is clearly getting bad advice.

But if you asked King Théoden, “Are you getting good advice?”

He’d probably say “Of course!”

“How do you know?”

“Why, Wormtongue assures me of it.”

“But how do you know you can trust Wormtongue?”

“Because I only listen to the best advice.”

Today, we all run the risk of becoming King Théoden.

The same happens to Shakespeare’s Othello, who trusts Iago as an advisor. Iago misinforms him, and epic tragedy ensues.

Today, we’re all potential Othellos.

The question is, what can we do about it?

One answer, it seems, is to make yourself a hard target for misinformation.

Many of us are soft targets.

Few of us have been taught how to be hard targets.

A “hard target” is a well-defended, well-protected establishment such as a military base or fortress. Good ones are nearly impenetrable. A “soft target,” on the other hand, is a place that’s vulnerable, with few defenses, like a grocery store or preschool.

When it comes to misinformation, a “soft target” is someone who is highly gullible – someone who believes everything they’re told.

Some people seem to want us gullible.

To be fair, the problem isn’t the practice of “believing what you’re told” in itself.

So long as what you’re told is reliable and accurate, trusting and believing other people works.

Maybe, once upon a time in the distant past, we could do that.

But those days seem to be over, if they ever existed at all.

Times have changed.

Today, we’re constantly surrounded by media.

We can hardly fill up a gas tank or buy groceries without someone carnival barking at us.

If we rewind a few centuries – or even just a few decades – “media” mostly consisted of a town crier, a few pamphlets, or newspapers, or – much later – a few radio shows, or three or so television channels.

It’s one thing to try to sort the real from the phony when it’s coming at you from two or three directions.

But today, it’s coming at us from all sides, at all times.

The game has changed.

The environment has become exponentially more complex.

But human nature is still the same.

Human nature can be slow to adapt to new environments, especially when some powerful interests are highly motivated to keep us from adapting.

But luckily, Spider-Man can shed some light here.

A villain illustrates very well what each of us is up against.

In the movie Spider-Man: Far From Home (spoiler alert), Spidey fights a supervillain named Mysterio.

Mysterio’s power comes from his ability to create illusions. It’s basically maya-on-demand.

Spidey and Mysterio fight “The Illusion Battle.”

Using an army of drones, Mysterio creates and controls Spidey’s entire external environment. From Spidey’s perspective, it’s completely immersive. Spider-Man’s experience of the world itself transforms: it’s as if he’s a character in a video game with Mysterio as the game programmer.

This makes Mysterio nearly impossible to fight.

Spidey tries to punch Mysterio, but ends up punching a concrete pylon. He “leaps off a building” to try to save a falling MJ, but is actually jumping headfirst toward the hard ground. He tries to spin webs on the drones themselves, but those turn out to be mere illusions of drones (created by real drones). And so on.

It’s like an eternal battle: an ego that endlessly creates illusions, our efforts to endlessly destroy them and find truth, and more creation of illusions, and on and on.

It’s a literal battle of the real versus the unreal, appearance versus reality. We can spend our energy battling illusions, trying to fight our way out of our self-imposed dream-world based on a lack of self-awareness, while the real world is playing out an entirely different story altogether. It can be like being asleep while dreaming about trying to wake up.

It’s a brutal dive down the rabbit hole.

“I don’t think you know what’s real, Peter!” Mysterio says.

But Mysterio doesn’t merely define the physical environment around Spider-Man.

He also moves even further into psychological and philosophical warfare.

He tries to define Spidey himself.

He uses Spidey’s humble past against him: “You are just a scared little kid in a sweat suit!”

He works on Spidey’s emotions: “Maybe if you were good enough, maybe Tony would still be alive.” He plays on guilt, trying to weaken him from within.

“I control the truth!” he says.

We’re all fighting a version of “The Illusion Battle” today.

We all listen to “trusted” friends and advisors, and in our own ways, fight our own mini-Mysterios.

But it goes even further. With social media, many of us also create media. So in some ways, in our own Illusion Battles, we’re both Spidey and Mysterio.

In some ways, for better or worse, we’re not only “creating our own reality.” We’re also “creating our own appearance.”

“Appearance versus reality” was once considered a philosophical problem reserved primarily for serious spiritual seekers.

But today, the problem is in everyone’s face.

This can make for a treacherous environment.

News articles, media personalities and so on often claim to “tell us how it is.” Our understanding of reality itself gets determined by who we listen to, or what data we expose ourselves to.

Yet they often contradict one another. Different sides accuse each other of being biased, slanted, inaccurate, brainwashed, propagandistic, etc.

Who is the hero? Who is the villain? It often depends, it seems, on who you ask, or who you listen to. The results of your experiments depend on which “side” – or which “data” – you either choose or ignore.

Amid all this, many life-or-death decisions today depend on knowing what “news” to consume or reject.

Sorting through this flood of information is one of the most important yet least-spoken-about challenges we face every day.

We often have one of three possible reactions to all this.

One approach is just to believe what we hear.

But this has obvious flaws. Again, we can become soft targets – gullible and vulnerable to exploitation. We can also become confused, because the different narratives often contradict.

So then, we might decide to take the opposite course: universal skepticism.

This basic approach is to disbelieve everybody and everything.

Universal skepticism clearly has some advantages over universal gullibility. But taken too far, that also leaves us isolated, paranoid, and – ironically – uninformed. If we literally block all information, we wind up empty and blank. This can make us just as vulnerable as those who believe everything.

A third approach might be to just deliberately absorb as much information as we possibly can.

This seems to be the clear winner at first, but it’s also problematic. Aside from requiring a lot of work – and creating a lot of confusion, if we really open ourselves up to all perspectives – it also introduces a new set of hazards.

If “media” is actually propaganda, then the most “informed” among us become the most propagandized.

If “education” is actually indoctrination, then the most educated turn out to be the most indoctrinated.

So, too little skepticism can be a problem, but so can too much – and mere “education” doesn’t solve the problem, but only kicks the can down the road.

So, what’s left?

Arriving at this point can make everyday life feel like a bewildering, nightmarish scramble through a carnival maze of funhouse mirrors.

It can lead us to the famous phrase from Hollywood icon William Goldman.

“Nobody knows anything.”

(In other words, nihilism.)

The difficulty today isn’t just bad journalism.

Bad journalism seems to have been around for as long as journalism itself.

The challenge isn’t even Photoshop, special effects, DeepFakes, AI art, virtual reality, or any of the other illusions-on-steroids situations that we have today.

It’s true that all of this can blur the lines between the “real” and the “unreal” to degrees our great-grandparents probably never could have imagined.

It’s also the sheer quantity of media we consume today. A single newspaper today supposedly contains more information than an average citizen from a few centuries ago encountered in an entire lifetime. A virtual avalanche of media can be pumped almost directly into our bloodstream 24/7.

Walter Truett Anderson said it well: “Reality isn’t what it used to be.”

This can easily lead to a form of nihilism.

“Nobody knows anything” is nihilism. It’s epistemological bankruptcy: drowning in a sea of information that offers no breath of truth. It’s floating on a raft, surrounded by ocean water, but dying of thirst. The irony is that the information age provides so much misinformation and so many partial truths that everyone winds up at least partly misinformed.

Amid this, it might seem reasonable to decide that everything is chaos and there’s just no way to navigate it.

Everything might seem meaningless.

Navigating this environment with sanity intact might seem impossible.

We might be tempted to just eject from the maze of funhouse mirrors, abandon all efforts to make sense of the insanity, and let whoever is running the asylum do what they want.

Or, not.

Maybe we can figure out some paths to navigate through it all.

All of this is, in some ways, a very modern problem.

But in other ways, it’s also a very old problem.

The situation boils down to a few basic questions.

What is real? What’s phony?

Who can I trust? Who is shooting straight with me, and who isn’t?

How do I know?

It’s challenging enough when we’re surrounded by floods of conflicting information. (Conflicting information, to be clear, isn’t the problem in itself. A single source with a monopoly on information – and which unifies all data along the lines of one single narrative – isn’t necessarily the answer. (See Orwell, North Korea, Pravda in the former Soviet Union, etc.))

But we also tend to think of the problem as involving mere information.

The real challenge, it seems, involves the meta-problem of deciding how we process information.

Before we even dive into the battlefield of information warfare itself, and the modern world as a relentless illusion factory – it seems that we have to start closer to home.

We have to start with ourselves.

Again, we can first help ourselves by understanding how easy it is to trap ourselves in insulated bubbles – circular, self-validating loops of confirmation bias.

“First of all, I’ll assume that I’m right. This is assumed unconsciously, before awareness even kicks in.

“Then, I seek out facts that agree with me and support me.

“Then I reject anything that disagrees with me as not credible. After all, since I presume that I’m right, any facts or ideas that don’t confirm what I think must be false, discredited, or rejected as invalid, dumb, or evil.

“And then – since nothing that says otherwise is credible – I’m therefore right, all the time.”

Therein lies the easy path to always being “right,” all the time.

It’s the royal road to total self-delusion.

In this sense, “Know Thyself” turns out to be not just a lofty slogan, but essential survival advice – even after thousands of years.

At this point, we’re left facing a central question.

For those of us who want to deliberately work to avoid self-delusion, the question is, how?

“Know thyself” might be good advice in the long run. But that doesn’t necessarily help us when we’re standing in the eye of a tornado of conflicting media narratives.

A thought experiment here might help.

Imagine you’re a hardboiled police detective.

You’re trying to solve the toughest case of your life. It involves a series of murders.

After months of blistering work on the case, you’ve finally narrowed your search down to two murder suspects. They’re sitting in your interrogation room, right now. Let’s call them “Bill” and “Bob.”

Everything so far tells you that one of these suspects is definitely the murderer. The other is definitely innocent.

But you don’t know which is which.

Each suspect claims he is innocent and the other guy is guilty.

Each man’s story checks out, and seems valid, based on all known facts.

But the two sides contradict each other.

They can’t both be right.

What do you do?

Many choose one of four different strategies.

Some just pick a side.

Which side?

It might be the side they’ve always been on, or the side their family or friends have always been on. The answer, in other words, is tradition.

Or, a second option: the decision might literally be arbitrary. It might be something they picked out of a hat when they were three years old. It might be based on habit, ease, peer pressure, whatever’s trendy, whatever makes someone look cool, or any number of factors. (Here, “Know thyself” strikes again.) The eventual result: “I’m on Team Bill.” (Or vice-versa. It’s arbitrary, after all.) Beyond that point, the need for consistency kicks in: everything Bill says is true, everything Bob says is a lie. (Or vice versa.) All answers are predetermined, and confirmation bias takes care of the rest. Either way, it eventually leads to a self-reinforcing bubble.

Or, a third option: to become paralyzed with uncertainty. The barrage of conflicting information can seem bewildering. This route can cause an individual to freeze in a state of overwhelm.

Or, a fourth option: some glimpse the scope of the problem and just clock out. They dismiss the entire situation as overwhelmingly hopeless, and avoid it however they can.

But what about the rest?

What about a fifth way for those brave souls who want to actively avoid confirmation bias, seek truth, and pop bubbles?

Those who are willing to do the work will examine a piece of information to decide whether it’s based on bias, fabrication, half-truth, unbalanced coverage, euphemism, diversion, a staged event, an anonymous source, a straw man, misplaced trust in authority, and so on.

But even before this sort of analysis can take place – before the effort to sort facts from fiction, or “the game” – there must be a system in place that determines the criteria for sorting, or “the rules of the game.”

Beyond mere “fact-sorting,” there’s “deep work.”

(The decision to engage in fact-sorting in pursuit of truth is itself a result of this deeper level of work.)

Some are willing and able to do this deep work.

For these daring, hearty individuals, here are a few tools.

They start at the ground level of a life philosophy or worldview, or fundamental orientation to life. This serves as a foundation of basic answers to The Big Questions of life, or the “existential riddles” we all face.

In our efforts to understand how to strengthen a life philosophy, we gathered several “yardsticks” that can help us determine “how to measure a life philosophy.”

But luckily enough, these tools turn out to work just as well when it comes to sorting news, information, or media narratives. We can use these to measure every “fact” or story when trying to smoke out the real from the phony.

Here they are.

For every bite of media we consume, we can ask these questions:

1. Is it coherent?
2. Is it arbitrary?
3. Is it comprehensive?
4. Is it consistent?
5. Is it unified?
6. Is it illuminating?
7. Is it livable?
8. What are the consequences?
9. How does it compare?

These can seem deceptively simple. They’re like the hammer, wrench, screwdriver, and other basic tools that make up an effective truth-and-baloney detection arsenal.

To examine these more closely:

1. “The Coherency Test” means asking, “Does it make sense?”

This test asks for clarification. Some ideas or narratives collapse immediately under simple questioning or basic clarification.

This test proves its effectiveness when we’re humble enough to ask about things we don’t understand. That said, it’s not foolproof by itself. The stories coming from both suspects in our interrogation room might be coherent, but more is needed to get to the bottom of things. Even further, we have to make sure we don’t err in the opposite direction. Some approaches can seem baffling at first, but eventually prove correct. Others can seem quite simple at first, but eventually prove false. Monster movies often feature a lone conspiracy theorist who sounds crazy (incoherent) in the beginning but is eventually proven right. The skeptic (“That’s nonsense!”) sounds clear and straightforward (coherent) at first – but is eventually revealed as mistaken.

A claim that’s coherent is stronger than one that isn’t.

2. “The Arbitrariness Test” means asking, “Why?”

This test asks for a reason. “Why do you say that?” The request is for self-evident premises, verifiable evidence, sound reasoning, a trusted authority, and so on, to “back up” an assertion. The hazard is circular logic, or simply declaring things: “I think it so because I think it so.”

That said, once again, this test is useful but not foolproof by itself. Sometimes things are self-evident. Sometimes, there is no additional reason, or no secondary reason is needed. “I exist.” “I’m hungry.” Reasons are good, but lines of evidence aren’t necessary for every assertion. (Otherwise, we can set ourselves up for endless loops. “Why do you believe A?” “Because of B.” “And why do you believe B?” “Because of C.” “And why do you believe C?” And so on.)

An assertion that has a proper response to “Why?” is stronger than one that doesn’t.

3. “The Comprehensiveness Test” means asking, “Does this explain all available data?”

This test asks, “Are we ignoring or excluding anything important?” It asks for information that isn’t being considered, but should be.

Many theories can make sense of some data. But they don’t necessarily make sense of, or even consider, all available data. Some ignore data – especially data that contradicts pet theories. Good theories make sense of all relevant available data; bad ones reject data they don’t like or want. They discredit, ignore, or actively shut down any data that doesn’t support the chosen narrative they’ve committed to. This litmus test separates legitimate scientists or journalists from hacks, quacks, or mere propagandists. When they come across data or information that contradicts their pet theories, do they modify their theories and narratives, or do they reject the data? Good ones adjust their theories and narratives. Bad ones reject, ignore, silence, or try to discredit the data.

Assertions that include all available data are stronger than those that don’t.

4. “The Consistency Test” means asking, “Is any of this contradictory?”

This test asks, “Do any known elements of this contradict one another?”

If someone claims to be against killing animals but eats hamburgers, claims to be against wealth while hoarding and spending tons of money, or rails against the evils of technology on a laptop or smartphone, something’s wrong.

Assertions that don’t contradict themselves are stronger than those that do.

5. “The Unity Test” asks, “Does this idea or narrative fit together into a unified whole?”

It’s similar to the Consistency Test but goes a little further.

Merely avoiding contradictions isn’t always enough. “2+2=4” doesn’t contradict the claim that “dogs bark,” but they don’t necessarily fit together into anything cohesive. Avoiding contradiction is like playing defense. “Unity” plays offense. It’s one thing to say that a steering wheel, an engine, and an exhaust pipe “aren’t inconsistent with each other.” It’s another to say that these different pieces all work together and interconnect in ways that lead to a greater, unified whole. (“It’s a car.”)

Claims that are unified are stronger than claims that aren’t.

6. “The Illumination Test” means asking, “Does this really explain anything?”

This test asks, “So, do I really know anything I didn’t know before?”

“Why did you do that?” “Because.” “But because why?” “Just because.” – that’s one example that isn’t illuminating. “Why did you do that?” “Because I wanted to.” That’s another example – slightly more illuminating, but not much. “Why did you do that?” “Because (insert wild and elaborate fictional narrative here).” This can be even worse: long, obtuse, and complex reasons that still ultimately explain nothing. “Why did he do that?” “Because of (insert name of any condition).” But merely adding a label to something doesn’t explain it. These are pseudo-explanations. Some ideas might be complex, flattering, intimidating, status-enhancing, or intellectual-sounding – without actually explaining anything.

Ideas that actually illuminate are better than ones that don’t.

7. “The Experience Test” means asking, “Can we actually apply this to real life?”

This test asks, “Is this livable?”

Some ideas sound great but are impossible to live consistently in everyday life. For example: “everyone should be richer, happier, and more famous than everyone else.” “If we just get rid of all the bad people, then we’ll only be left with good people.” Anyone who claims “I believe in nothing!” believes in the idea of believing in nothing. “Achieve the impossible!” is a contradiction. (Spoiler: If something is actually impossible, it can’t be done. Or, if someone does something, then it wasn’t actually impossible. To say, “Let’s do something that we think is impossible, but actually isn’t” – is fair game. Not to spoil the party, but this has become such a cliché that it’s worth clarifying.) And so on.

Ideas that work in real life are stronger than ideas that don’t.

8. “The Consequences Test” means asking, “What happens as a result of this?”

This test asks, “What are the actual, real-world effects of a certain idea, fact, or story? Does it result in more love, harmony, wisdom, beauty, justice, fun, joy? Does it result in misery, violence, suffering, destruction, death?”

Sometimes this can take time to play out. Plenty of ideas sound great initially but don’t hold up over time. Many fall apart soon after they leave the classroom or the imagination. The hard knocks of real life might destroy some immediately, while others might take decades or centuries. Some ideas begin with noble intentions but end up utterly destructive and misery-inducing.

Ideas that hold up, as measured by their actual consequences in the real world, are stronger than those that don’t.

9. “The Comparison Test” means asking how an idea compares to others.

This test asks, “How does this idea relate to all other insights, discoveries, and achievements throughout the history of humanity?”

Brilliant individuals have been making discoveries throughout history. What would they have to say about a certain idea? A look at how certain ideas compare to others throughout history can be revealing.

Ideas examined in relation to the greater context of the whole of human history hold up better than those that aren’t.

Those are a few tests.

That said, we’re still three yardsticks short of a dirty dozen.

Here are three more, including one that we’ve already touched on briefly.

10. We can ask, “What’s beyond the mere facts?”

We need to abandon “fact-worship.”

Oscar Wilde said it well:

“If something cannot be done to check, or at least to modify,
our monstrous worship of facts,
art will become sterile
and beauty will pass away from the land.”

Many of us today assume that “The facts are on my side, and everyone who disagrees with me is ignoring the facts.”

This can easily lead us to conclude that “anyone who disagrees must be ‘ignoring the facts.’” (This means we’re accusing them of failing The Comprehensiveness Test.)

But why are they “ignoring the facts”? Is that what’s happening, really?

The next step from here is to assume that they’re either ignorant, dumb, crazy, or evil.

These assumptions can lead to situations that aren’t pretty.

But the bigger picture is sometimes more complex.

This isn’t to say that ignorance, stupidity, insanity, or evil don’t exist. But we’re sometimes too quick to dismiss those who start with very different premises in their worldview.

Sometimes, for example, they’re just narrowly focused on a different set of facts, or starting from very different premises, without considering any others. It’s the old “blind men and the elephant” parable. They base their life philosophies on different exemplars.

Baruch Spinoza said it well centuries ago:

"And most controversies have arisen from this,
that men do not rightly explain their own mind,
or interpret the mind of the other man badly.
For really, when they contradict one another most vehemently,
they either have the same thoughts,
or they are thinking of different things,
so that what they think
are errors and absurdities in the other are not.”

The problem today isn’t necessarily that half of the world or more has suddenly become ignorant, dumb, crazy, or evil. At least part of the problem lies in fact-worship. It might be that we’re using a model to judge our entire understanding of the world that steers us toward making errors.

“Fact worship” means assuming facts tell the entire story.

But they don’t.

Beyond “mere facts,” there are three other elements we often ignore.

There’s fact isolation, fact selection, and fact interpretation.

First: a “fact” is a tiny slice of reality. It isn’t the whole of reality. That’s “fact isolation.” No fact is an island. Every fact is a fraction of reality. There’s always more to life than a simple fact. This leads us to fact selection.

“Fact selection” – the second element – means that we often pick and choose certain facts and ignore others. This fails the comprehensiveness test mentioned above. “Fact-ignorers” might not necessarily have bad motives – there might just be additional perspectives they aren’t aware of. In the funhouse mirror of the modern world, this isn’t just understandable. It’s inevitable.

“Fact interpretation” – the third element – points not just to a fact itself, but to how we interpret it. Every fact needs to be interpreted. What does it mean? To convert an isolated fact into something meaningful, every fact gets woven into a greater narrative. This narrative can change the meaning of an isolated fact to a profound degree. For example, take this “isolated fact” as a headline: “Man stabs woman with knife!” That “fact” could lead to a response along the lines of, “Throw him in jail!” But a different headline – “Doctor performs life-saving surgery on woman” – might refer to the same event. Both phrases might be “factually” true, but they lead to very different interpretations of that same fact.

A healthy interpretation can lead to clarity, while a toxic interpretation leads to “spin.”

This points to the critical factor of understanding how we interpret or “frame” certain facts.

How we frame or interpret facts depends on our worldview or life philosophy. These are our basic assumptions about life – our answers to The Big Questions of life – that we bring to every fact. They’re like the glasses we see facts through. They color everything we see, even though sometimes we get so used to it that we’re unaware of it.

Economists are notorious for being blind to this.

Two economists can look at the exact same numbers, yet come to wildly different conclusions. Each assumes his side is rational while all other sides are ignorant, dumb, crazy, or evil. But what’s actually happening? They come to different conclusions because of their initial assumptions (their worldviews, or life philosophies); they fail to account for or even discuss those assumptions; and because of blind fact-worship, they talk only at the surface level of facts. They ignore the bigger questions of life that factor into their fact selection and interpretation.

Blind fact-worship leads to unnecessary misunderstanding and misery.

Two other tools can also help us.

One is “The Dialogue Test.”

Are both sides willing to talk?

Are both sides able to even understand each other? If not, are they willing to try?

Can both sides explain the position of the other without misrepresenting or caricaturing them?

Are both sides willing to have an open conversation in good faith, using reason, empathy, logic, and a willingness to genuinely engage in the pursuit of truth?

If Bill is willing to talk but Bob isn’t, then Bill is probably real, and Bob is probably phony.

That’s tool #11.

And finally, tool #12 is the “Explaining Alternate Theories” test.

Can one side explain the existence of the other?

Can Bob account for the existence of Bill, and vice versa?

Does one narrative “transcend and include” the other – meaning, does it explain the other narrative, and more?

If two stories contradict, they need to explain not only their interpretation of events, but also the other side.

Bill might claim that Bob is a space alien. Bob might claim that Bill is crazy. Both sides explain the other side, but one is more credible and likely than the other.

This is why Einstein’s explanation of how the world worked eventually won out over Newton’s. As Thomas Kuhn noted, both paradigms or worldviews – Einstein’s and Newton’s – could explain the same facts. But Einstein’s could also explain more (such as the bending of light and time near massive objects). Newton’s narrative, however, couldn’t explain Einstein’s, or some of the facts Einstein wrestled with. That’s why Einstein’s model gradually replaced the Newtonian one. Similarly, Augustine was able to explain the existence of the Roman myths, but the Roman myths weren’t able to explain the existence of Augustine. Few believe in the Roman gods today.

These twelve tools can help us sort the real from the phony in the information game.

All of these ultimately depend on our desire to not be fooled, our openness to truth, and our willingness and ability to know ourselves.

As Mysterio said, “It’s easy to fool people when they’re fooling themselves.”

(To be fair, credit that line of dialogue to the writers, Chris McKenna and Erik Sommers.)

This approach isn’t that of scientists or media entertainers, who typically work in highly controlled, highly polished environments.

For our purposes here, our role models are more like police detectives.

They’re individuals who often work in the gritty, street-level world of hard truths, walking the line between total skepticism and gullibility. They aren’t ivory-tower scientists, starry-eyed saints, or head-in-clouds academics.

They don’t easily believe anyone. But they also don’t have the luxury of simply disbelieving everyone. In other words, they walk the line between gullibility and skepticism.

And they can’t just clock out. After all, they have a job to do. They have to solve the case. So, they gather as many facts as possible. They look for inconsistencies. They test out various interpretations and explanations, and so on.

And like us, they’re often grappling in the dust and fog, dealing more with messy situations and nefarious characters on the streets of everyday life.

- and in the interrogation room.

When we’re faced with interrogating our own Bills and Bobs, and neither seems willing to crack, we can ramp things up a bit.

When all else fails, we can run our own tests to expose the truth.

For example, in one story that’s thousands of years old, two women came forward with a disagreement they couldn’t resolve on their own.

Each claimed to be the mother of a young infant.

Mother A said she was the real mother, and the other was an imposter.

Mother B said the same thing.

It wasn’t obvious which mother was real.

The man they came to had to decide. Who was the real mother?

Based on all available facts, science, logic and so on, there was no way to know who was real and who was phony.

So, what did he do?

He drew a sword, and in so many words, said this:

“We’ll cut the baby in half, and give half to each.”

Hearing this, one of the women said, essentially, “Fine. Let’s do it. Then that baby would be neither mine nor hers.”

The other mother was horrified. She told him not to do it. She said the other woman could have the child. Just don’t kill him.

The man – Solomon – made his decision. He decided to give the baby to the mother who told him to stop.

He had smoked out the truth by getting creative. He took an approach somewhere between total gullibility and total skepticism. He didn’t necessarily believe or disbelieve either side, but creatively orchestrated a scenario engineered to squeeze out new data, and applied pressure. He actively created a way to wrench secrets out of the situation – to produce coherent (#1) new information to factor in (#3), which revealed an explanation (#5) that one mother was acting in a way consistent (#4) with how a real mother would act, while the other mother wasn’t.

Few of us will likely face literal decisions like the one above.

But then again, today, as we all swim through avalanches of facts, narratives, and meta-narratives, we all seem to face smaller but similar, slightly altered versions of it.

We can only guess how Solomon might have dealt with Mysterio.

But these days, we’re all Solomon. And we’re all Spidey.

Discuss This Article

If you liked this, check out:

Needful Things as a Guide to Modern Times

Philosophical Self-Defense: A Basic Primer

How to Not Be Manipulated: The Basics in 15 Minutes

12 Steps to Untangling Yourself From Postmodern Confusion

Lack of Self-Awareness and the Thin, Invisible Line

"Know Thyself": A User's Guide

Why We Need a Spiritual Renaissance

Spread the love.