Questions shape perceived truth

When people try to verify dubious news by searching online, they often end up more likely to believe the misinformation than to debunk it. Research shows that encouraging people to search online to check false news actually increases belief in those claims by 5.7 percentage points. The effect is strongest when search engines return lower-quality information.

Researchers attribute part of the problem to “data voids”: gaps that open during breaking news events, when credible information isn’t yet available. Search engines fill these gaps with whatever content exists, often from non-credible sources. Five separate experiments confirm that searching online to evaluate a claim makes people more likely to believe it when it is false.


Several factors make people vulnerable to misinformation. Partisanship and identity are major influences: people are more likely to believe false information that matches their existing views, a pattern researchers attribute to confirmation bias and motivated reasoning. Surprisingly, younger adults (Gen Z and millennials) are more susceptible than older adults.

Time spent online matters as well. People who spend more time online for fun are more likely to believe misinformation. Only 15% of people who spend nine or more hours online daily show strong resistance to misinformation, compared to 30% of those who spend less than two hours.

News consumption habits make a difference: people who follow traditional news sources such as the Associated Press, NPR, and Axios are better at identifying false information. Even so, the average American correctly identifies only 65% of false headlines, and one study found that the treatment group directed to search online showed a 19% increase in the probability of rating false articles as true.
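The two statistics above can be combined in a back-of-the-envelope way. A minimal sketch, assuming the 19% figure is a relative increase applied to the baseline misbelief rate implied by the 65% identification accuracy (the two numbers come from different measurements, so this is illustrative only, not a result reported by the studies):

```python
# Illustrative arithmetic combining the article's two figures.
# Assumptions: 65% accuracy implies a 35% baseline rate of rating
# false headlines as true, and the 19% increase is relative.
baseline_correct = 0.65                      # false headlines correctly identified
baseline_misbelief = 1 - baseline_correct    # false headlines rated true: 0.35
relative_increase = 0.19                     # reported effect of searching online

after_search = baseline_misbelief * (1 + relative_increase)
print(f"misbelief after searching: {after_search:.3f}")  # roughly 0.42
```

Under these assumptions, searching online would push the share of false headlines rated true from about 35% to roughly 42%.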

Various solutions show promise. Media literacy programs, fact-checking, platform design changes, and content moderation can all help reduce the spread of misinformation. Simple “accuracy prompts” (brief nudges asking users to consider whether a claim is true before sharing it) can reduce sharing of false news by 10–16.5%.

Psychological factors also contribute to misinformation spread. Hostility toward political opponents, personality traits, and simple inattention to accuracy all play roles. People at political extremes are especially likely to see and believe false content that supports their views.

The research suggests that how we search for information might be as important as what we’re looking for.
