questions shape perceived truth

When people try to verify dubious news by searching online, they are often more likely to end up believing the misinformation than debunking it. Research shows that encouraging people to search online to check false news actually increases belief in those claims by 5.7 percentage points, and the effect is strongest when search engines return lower-quality information.

Experts attribute this to "data voids": gaps that open especially during breaking news events, when credible information isn't yet available. Search engines fill the gap with whatever content exists, often from non-credible sources. Five separate experiments confirmed that searching online to evaluate a claim as it is breaking makes people more likely to believe it when it is false.

When credible information isn't available, search engines deliver what exists, often from dubious sources, making us more susceptible to falsehoods.

Several factors make people vulnerable to misinformation. Partisanship and identity are major influences: people are more likely to believe false information that matches their existing views, a tendency reinforced by confirmation bias and motivated reasoning. Surprisingly, younger adults (Gen Z and millennials) are more susceptible than older adults.

Time spent online matters as well. People who spend more time online for fun are more likely to believe misinformation. Only 15% of people who spend nine or more hours online daily show strong resistance to misinformation, compared to 30% of those who spend less than two hours.

News consumption habits make a difference: people who read traditional news sources such as the Associated Press, NPR, and Axios are better at identifying false information. Even so, the average American correctly identifies only 65% of false headlines, and one study found that the treatment group showed a 19% increase in the probability of rating false articles as true after searching online. Automated tools are no cure-all either: while AI can process information quickly, it lacks the ethical judgment necessary to evaluate morally complex news stories.
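The two headline figures reported for the search effect, an absolute rise of 5.7 percentage points and a 19% relative increase, describe the same result two ways. A short sketch makes the distinction concrete; the 30% baseline rate used here is a hypothetical value chosen only so the two reported figures line up, not a number from the study itself.

```python
# Percentage points (absolute change) vs. percent (relative change).
# The baseline below is an assumed, illustrative value.
baseline = 0.30        # hypothetical share rating false articles as true
increase_pp = 0.057    # absolute increase: 5.7 percentage points

treated = baseline + increase_pp          # rate after searching online
relative_change = increase_pp / baseline  # same change, expressed relatively

print(f"treated rate: {treated:.1%}")               # 35.7%
print(f"relative increase: {relative_change:.0%}")  # 19%
```

The same 5.7-point jump would be a much larger relative increase from a lower baseline, which is why reports quoting "percent" and "percentage points" can sound inconsistent while describing one effect.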

Various solutions show promise. Media literacy programs, fact-checking, platform design changes, and content moderation can help reduce misinformation spread. Simple “accuracy prompts” can reduce sharing of false news by 10-16.5%.

Psychological factors also contribute to misinformation spread. Hostility toward political opponents, personality traits, and simple inattention to accuracy all play roles. People at political extremes are especially likely to see and believe false content that supports their views.

The research suggests that how we search for information might be as important as what we’re looking for.
