Can artificial intelligence really be an impartial and rational judge? According to a new study published in the journal Manufacturing & Service Operations Management, the answer is more complicated than you might think. The authors of the research found that ChatGPT, one of OpenAI's most advanced language models, reproduces several cognitive biases typical of human thought, falling into the same decision-making traps that afflict people, such as the gambler's fallacy and overconfidence in its own answers.
Yet in other cases the AI behaves in the opposite way to a human being, showing a certain immunity to common errors such as the sunk cost fallacy and base rate neglect.
ChatGPT fails in half of the tests
In the study, titled “A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?”, the researchers subjected ChatGPT to 18 specific tests designed to detect cognitive biases. The results reveal that the model showed biased behavior in almost half of the cases, confirming how much its “decisions” can be influenced by mental shortcuts similar to human ones.
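The study's actual prompts and scoring procedure are not reproduced in this article; as a purely illustrative sketch, a probe of this kind could be posed to the model through OpenAI's Python client, for example to test for the gambler's fallacy. The model name, prompt wording, and scoring rule below are assumptions, not the paper's method.

```python
# Illustrative sketch only: poses a single gambler's-fallacy probe to a GPT model
# via OpenAI's Python client. Prompt wording and scoring are assumed, not taken
# from the study.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompt = (
    "A fair coin has landed on heads five times in a row. "
    "On the next flip, is tails more likely, less likely, or equally likely as heads? "
    "Answer with one word."
)

response = client.chat.completions.create(
    model="gpt-4",  # the study compared GPT-3.5 and GPT-4
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

answer = response.choices[0].message.content.strip().lower()
# A bias-free answer is "equally"; "more" would signal the gambler's fallacy.
print("model answer:", answer)
```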
Among the biases detected most frequently were overconfidence and the gambler's fallacy.
The AI's behavior appeared consistent across different business contexts, but a significant difference also emerged between versions of the model. GPT-4, for example, showed greater analytical accuracy than GPT-3.5, but at the same time seemed more prone to bias in tasks requiring subjective judgment.
Applications of artificial intelligence in sensitive decision-making areas, such as hiring, loan approvals, or candidate selection, are increasingly widespread. But if a model like ChatGPT replicates the same human cognitive errors, it risks perpetuating flawed decisions rather than correcting them.
“When AI learns from human data, it ends up thinking like a human being, biases included,” explains Yang Chen, lead author of the study and professor at Western University. The risk, the other researchers also stress, is that AI is perceived as an impartial referee when in reality it can commit the same systematic errors that people do.
Anton Ovchinnikov, of Queen’s University, clarifies:
When it comes to logical or probabilistic problems, AI performs better than the average human. But in tasks that require subjective judgment, it falls into the same mental traps.
Samuel Kirshner, of the UNSW Business School, adds:
AI is not a neutral judge. If it is not monitored, it could aggravate decision-making problems instead of solving them.
Continuous human oversight is needed
In light of these results, the researchers recommend ongoing supervision and periodic review of decisions made by artificial intelligence models. The growing use of these tools in managerial and public settings calls for an ethical and responsible approach, with constant checks on automated decision-making processes, as Meena Andiappan of McMaster University puts it:
It is essential to treat AI as an employee with decision-making power. We need a system of rules, monitoring, and ethical guidelines; otherwise we risk automating distorted thinking instead of improving it.
Finally, Tracy Jenkin of Queen’s University points out that the evolution from GPT-3.5 to GPT-4 suggests a tendency toward greater “humanization” in certain cognitive aspects, while in others the model improves in mathematical and logical precision:
Managers must continuously evaluate how different models behave for their specific use case. Some areas will require significant customization of the model.