Questions shape perceived truth

When people try to verify dubious news by searching online, they often end up believing the misinformation rather than debunking it. Research shows that encouraging people to search online to check false news actually increases belief in those claims by 5.7 percentage points. The effect is strongest when search engines return lower-quality information.

Researchers attribute this to "data voids": gaps in the information landscape that are most common during breaking news events, before credible coverage is available. Search engines fill these voids with whatever content exists, often from non-credible sources. Five separate experiments confirmed that searching online to evaluate a claim at the time of investigation makes people more likely to believe it when it is false.


Several factors make people vulnerable to misinformation. Partisanship and identity are major influences: people are more likely to believe false information that matches their existing views, with confirmation bias and motivated reasoning reinforcing this tendency. Surprisingly, younger adults (Gen Z and millennials) are more susceptible than older adults.

Time spent online matters as well. People who spend more time online for fun are more likely to believe misinformation. Only 15% of people who spend nine or more hours online daily show strong resistance to misinformation, compared to 30% of those who spend less than two hours.

News consumption habits make a difference: people who read traditional news sources such as the Associated Press, NPR, and Axios are better at identifying false information. Even so, the average American correctly identifies only 65% of false headlines, and one study found that participants who searched online showed a 19% increase in the probability of rating false articles as true.

Several interventions show promise. Media literacy programs, fact-checking, platform design changes, and content moderation can all help reduce the spread of misinformation, and simple "accuracy prompts" can cut sharing of false news by 10 to 16.5%.

Psychological factors also contribute to misinformation spread. Hostility toward political opponents, personality traits, and simple inattention to accuracy all play roles. People at political extremes are especially likely to see and believe false content that supports their views.

The research suggests that how we search for information might be as important as what we’re looking for.

