How online search questions shape perceived truth

When people try to verify dubious news by searching online, they often end up believing the misinformation rather than debunking it. Research shows that encouraging online searches to fact-check false news actually increases belief in those claims by 5.7 percentage points, and the effect is strongest when search engines return low-quality information.

Experts attribute this to "data voids": gaps that open especially during breaking news events, when credible information isn't yet available. Search engines fill the gap with whatever content exists, often from non-credible sources. Five separate experiments confirm that searching online to evaluate a dubious claim while it is circulating makes people more likely to believe it.


Several factors make people vulnerable to misinformation. Partisanship and identity are major influences: people are more likely to believe false information that matches their existing views, a pattern experts trace to confirmation bias and motivated reasoning. Surprisingly, younger adults (Gen Z and millennials) are more susceptible than older adults.

Time spent online matters as well. People who spend more time online for fun are more likely to believe misinformation. Only 15% of people who spend nine or more hours online daily show strong resistance to misinformation, compared to 30% of those who spend less than two hours.

News consumption habits make a difference: people who read traditional news sources such as the Associated Press, NPR, and Axios are better at identifying false information. Even so, the average American correctly identifies only 65% of false headlines, and one study found that the treatment group showed a 19% increase in the probability of rating false articles as true after searching online.
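The article quotes both an absolute rise (5.7 percentage points) and a relative rise (19%), two different ways of describing a change in a rate. The sketch below shows the arithmetic relating them; the assumption that both figures describe the same baseline measurement is illustrative, not something the studies state.

```python
# Illustrative arithmetic: percentage points vs. relative (%) increase
# in the rate of rating false articles as true.
# Assumption (not from the cited research): both figures measure the
# same baseline, so they can be combined.

pp_increase = 5.7         # absolute rise, in percentage points
relative_increase = 0.19  # 19% relative rise in the treatment group

# If both numbers described the same outcome, the implied baseline
# belief rate would be:
implied_baseline = pp_increase / relative_increase
print(f"implied baseline belief rate: {implied_baseline:.0f}%")  # 30%

# Conversely, a 19% relative increase on that baseline yields:
post_search_rate = implied_baseline * (1 + relative_increase)
print(f"post-search belief rate: {post_search_rate:.1f}%")       # 35.7%
```

The distinction matters when reading such studies: a 19% relative increase on a small baseline is a much smaller absolute change than 19 percentage points.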

Various solutions show promise. Media literacy programs, fact-checking, platform design changes, and content moderation can help reduce misinformation spread. Simple “accuracy prompts” can reduce sharing of false news by 10-16.5%.

Psychological factors also contribute to misinformation spread. Hostility toward political opponents, personality traits, and simple inattention to accuracy all play roles. People at political extremes are especially likely to see and believe false content that supports their views.

The research suggests that how we search for information might be as important as what we’re looking for.
