They flow past our eyes while we wait in line at the supermarket or sit on the sofa at the end of the day. An elderly couple celebrating a touching anniversary, a child alone on his birthday, a countryside scene that smells of the past. We leave a like, write a comment, maybe share. Then it turns out that the image never existed. It was created by artificial intelligence. And no, it didn’t happen because we are naive.
Research published in Computers in Human Behavior explains why AI-generated images on Facebook manage to hit us so deeply. The answer lies not in technology, but in psychology. And it concerns everyone.
Why AI imagery bypasses critical thinking
The study was conducted by Mark Miskolczi, a researcher at Corvinus University of Budapest, who decided to look less at spectacular deepfakes and more at what we encounter every day on social media. Not big political misinformation, but everyday emotional clickbait.
According to Miskolczi, the point is not whether an image is technically perfect. Many are not at all: hands with too many fingers, strangely smooth faces, details that don’t add up. Yet they work. They work because they tell simple, immediate emotional stories, reassuring or heartbreaking. And when a story touches us, the brain stops being a detective.
Facebook is increasingly populated by synthetic images mass-produced by pages that seem harmless but are in reality content farms, “content factories” whose sole purpose is to generate interactions. Likes, comments, shares. Every reaction is worth visibility and, therefore, money.
To study the phenomenon, the researcher directly observed public feeds, selecting pages that frequently published suspicious images. After a double check, both manual and with AI detection tools, 146 images confirmed as artificial and over 9,000 comments written by real people were analysed.
The most interesting finding is not how many people were fooled, but how they reacted. Prayers, well-wishes, messages of comfort, words full of humanity addressed to subjects who do not exist. There is no cynicism in these responses. There is empathy.
Nostalgia and compassion: the emotions that lower our guard
The most effective images are those that evoke an idealized past or an evident fragility. Elderly couples who “have been together for 60 years”, sad children, lonely people. Scenes that confirm what we already feel: that once upon a time everything was simpler, that pain must be consoled, that ignoring it would be inhuman.
Here a very human mechanism comes into play: if something resonates with our values, we tend to accept it without too many questions. This is the so-called confirmation bias. If the image reinforces what we already believe, why should we doubt it?
Then there is emotional anchoring. The first emotion we feel becomes the point of reference. If a caption tells of a child forgotten on their birthday, that initial sadness captures us. At that point, any visual errors fade into the background.
When comments make a lie “true”
Another key factor is the role of the crowd. If a post has thousands of likes and an avalanche of loving comments, we tend to trust it. It’s an automatic reflex: if everyone believes it’s true, then it probably is.
According to Miskolczi, the comment sections thus become a kind of credibility machine. Some comments are written by bots programmed to reinforce the narrative. But whoever arrives later doesn’t know that. They just see a community participating and coming together. Not out of naivety, but out of a need for connection.
One of the most important findings of the study concerns a stubborn prejudice: the idea that it is mainly elderly or poorly educated people who fall for it. The data does not confirm this. The mental shortcuts the brain uses work at any age and any level of education, especially in the fast-scrolling contexts typical of social media.
Vulnerability is not a question of intelligence, but of emotional context. Tiredness, loneliness, hurry, the need to feel part of something. All states we know well.
The biggest risk: losing faith in everything
The problem, the researcher warns, is not just momentary deception. It’s the long-term effect. If we become unable to distinguish between true stories and automated fiction, we risk doubting everything, even authentic images, real testimonies, real human stories.
There’s no need to panic; there is a need for awareness. Learning to slow down, observe more carefully, ask ourselves why a piece of content affects us so much. Not to become cold or cynical, but to protect precisely the part of us that reacts with empathy. Because the paradox is this: AI imagery works so well precisely because we are human. And perhaps understanding how we are touched is the first step towards not being used.