Human overconfidence is holding back the adoption of artificial intelligence. That is the conclusion of military researchers who set out to understand how useful AI would be on the battlefield.

The US Army continues to modernize its forces and is spending significant resources on machine learning algorithms meant to support battlefield decision-making. However, knowing what AI can do is clearly not enough: it remains unclear whether AI advice will actually help people make decisions or get in the way, EurekAlert reports.

The study, conducted by military experts together with scientists at the University of California, Santa Barbara, concluded that most people trust their own judgment far more than conclusions produced by a computer.

Thus, even if a flawless AI were built, people would still not listen to its advice.

The researchers confirmed their conclusions in an experiment resembling the prisoner's dilemma, a problem from game theory in which a player must decide whether or not to cooperate with an opponent, weighing various factors.

In each round, the AI gave participants advice, displayed separately from the game interface, and always recommended the optimal strategy. But participants could toggle the tips on and off manually, and no one forced them to follow the AI's prompts.
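To make the setup concrete, here is a minimal sketch of a prisoner's-dilemma round with an "advisor" that recommends the payoff-maximizing move. The payoff values and the advisor logic are illustrative assumptions using the textbook payoff matrix, not the study's actual implementation.

```python
# Standard prisoner's-dilemma payoffs: (my_payoff, opponent_payoff).
# C = cooperate, D = defect. These values are the textbook defaults,
# assumed here for illustration.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def advisor(predicted_opponent_move: str) -> str:
    """Recommend the move with the higher payoff against the
    predicted opponent move. In a one-shot game, defection
    dominates regardless of the prediction."""
    return max(("C", "D"),
               key=lambda m: PAYOFFS[(m, predicted_opponent_move)][0])

def play_round(my_move: str, opp_move: str) -> int:
    """Return the player's payoff for one round."""
    return PAYOFFS[(my_move, opp_move)][0]

# Defection is the dominant strategy against either prediction:
assert advisor("C") == "D" and advisor("D") == "D"

# A player who follows the advice against a defector scores 1;
# a player who ignores it and cooperates scores 0.
print(play_round(advisor("D"), "D"))  # 1
print(play_round("C", "D"))           # 0
```

The point of the sketch is the asymmetry the study exploited: the advisor's recommendation is provably optimal, so any deviation by the participant is attributable to distrust rather than strategy.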

The scientists also created several other versions of the AI: some made mistakes in their tips, some required information to be entered manually, and some described their reasoning in detail.

The results may discourage AI supporters: two-thirds of the decisions people made went against the machine's advice, regardless of how many errors the tips contained.

Explaining the advice did not help either.

"Rational explanations also proved ineffective for some people, so developers will need to be more creative in designing interfaces for such systems," the scientists write.

It turned out that the higher players rated their own understanding of the game, the less they relied on the computer's advice.

The result is fitting: people who considered themselves smarter than the computer actually performed worse than their more modest counterparts who used the hints.

Dario Gil, IBM's director of operations for AI and quantum computing, is confident that 2019 will be the year we learn to trust machines. But he gave that interview before these results were published.

