Because artificial intelligence understands what a flower is, but not the way you do

A rose is not just a name. It is not a set of letters, nor a definition taken from a dictionary. A rose is its perfume spreading through the air, the delicacy of the petals under your fingers, the vivid image that stays impressed in the mind. And all of this, artificial intelligence cannot know, at least not the way we do.

This is confirmed by a new study published in Nature Human Behaviour and conducted by Ohio State University, according to which not even the most advanced AI language models are able to truly represent the concept of “flower”. Even when trained on billions of words, these systems cannot have the sensory experiences that make a flower something more than a word.

The limit of artificial intelligence is clear

“A language model cannot smell a rose, nor caress the petals of a daisy, nor walk through a field of wildflowers,” explains Qihui Xu, lead author of the study and postdoctoral researcher in psychology. “And without those sensory and motor experiences, it will never understand what a flower really is in its entirety.”

The point is simple but deep: human knowledge is not made only of words, but of body, emotions, and direct contact with the world. And while artificial intelligence is built on language models, human beings build concepts by weaving together sounds, smells, images, touch, emotions and actions.

Xu and his team compared four advanced language models (GPT-3.5 and GPT-4 from OpenAI, PaLM and Gemini from Google) with the way people interpret more than 4,400 words, including “flower”, “hoof”, “humorous” and “swing”.

The comparison was carried out on two fronts, measuring how closely the models' judgments of these words matched human ratings of them.

The results are clear: when words named abstract concepts, disconnected from the senses, the AI was surprisingly good at simulating their human representation. But as soon as the concepts became bodily, tied to the senses, the models collapsed.
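To give a concrete sense of what “comparing” means here, the sketch below is a hedged illustration, not the study's actual pipeline: all the numbers are invented, and the real study used thousands of words and standardized human rating norms. It shows one simple way to measure how closely a model's word ratings track human ratings along a single sensory dimension:

```python
# Illustrative sketch only: hypothetical data, not the study's code.
from statistics import correlation  # Pearson's r (Python 3.10+)

# Invented 0-5 ratings of how strongly each word evokes smell
human_smell = {"flower": 4.8, "pasta": 4.2, "hoof": 1.1, "swing": 0.3}
model_smell = {"flower": 4.5, "pasta": 2.0, "hoof": 0.9, "swing": 0.5}

words = sorted(human_smell)
r = correlation([human_smell[w] for w in words],
                [model_smell[w] for w in words])
print(f"Human-model agreement on 'smell' ({len(words)} words): r = {r:.2f}")
```

In these terms, the study's finding is that agreement stays high on abstract dimensions but drops on sensory and motor ones.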

Think of “pasta” and “rose”: both evoke the sense of smell. But for us humans, pasta is more similar to spaghetti than to roses, because vision, taste and its function as food also come into play. AI, on the other hand, struggles to make these multisensory associations.

“From the intense perfume of a flower, to the softness of the petals when we caress them, to the joy it stirs in us: all of this is intertwined in our mind to form a complex idea of ‘flower’,” the researchers write. An idea that an AI based only on text cannot capture.

Even the most advanced models, trained on immense quantities of text, well beyond what a person reads in a lifetime, cannot close this gap.

But something is changing

Not all is lost. The study found that models trained on images as well as text do better at representing visual concepts, such as those related to sight or shape. And in the future, if they are integrated with sensory data and robotic technologies, they could begin to perceive the physical world, at least in part, as Xu points out:

“Tomorrow the AI could have access to the senses, perhaps through sensors, robots or other interfaces. And then, yes, perhaps it will be able to better understand bodily concepts like ‘flower’. But for now, that kind of understanding still belongs to us.”

The study was carried out in collaboration with Yingying Peng, Ping Li and Minghua Wu of the Hong Kong Polytechnic University, Samuel Nastase of Princeton University and Martin Chodorow of the City University of New York.