🤖 AI Summary
Research reveals AI users abandoning logical thinking
According to the article, people who use AI generally fall into two groups. One group treats AI as a powerful but sometimes inaccurate service that requires human oversight and review. The other treats AI's answers as coming from an "all-knowing machine" and routinely outsources its own critical thinking. For that latter group, the researchers propose a new psychological framework: "cognitive surrender."
In a study spanning 1,372 participants and more than 9,500 trials, subjects accepted faulty AI reasoning in 73.2 percent of trials and overruled it in only 19.7 percent. The researchers note that "people readily incorporate AI-generated outputs into their decision-making processes."
The results also suggest that cognitive surrender is not inherently irrational. Although the AI in these experiments was wrong half the time, the researchers caution that a statistically superior system could outperform humans, with performance "rising when accurate and falling when faulty."
An anonymous reader quotes a report from Ars Technica: When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine. Recent research goes a long way toward forming a new psychological framework for that second group, which regularly engages in "cognitive surrender" to AI's seemingly authoritative answers. That research also provides some experimental examination of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.
Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this "demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism." In general, "fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation," they write. These kinds of effects weren't uniform across all test subjects, though. Those who scored highly on separate measures of so-called fluid IQ were less likely to rely on the AI for help and were more likely to overrule a faulty AI when it was consulted. Those predisposed to see AI as authoritative in a survey, on the other hand, were much more likely to be led astray by faulty AI-provided answers.
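For a rough sense of scale, here is a back-of-the-envelope tally; it assumes, purely for illustration, that the reported percentages apply uniformly across all 9,500 trials, which the story does not spell out:

```python
# Illustrative arithmetic only: assumes the reported rates apply to all trials.
trials = 9_500                     # "over 9,500 individual trials"
accepted = round(0.732 * trials)   # trials where faulty AI reasoning was accepted
overruled = round(0.197 * trials)  # trials where it was overruled
print(f"accepted ~ {accepted:,}, overruled ~ {overruled:,}, "
      f"remaining ~ {trials - accepted - overruled:,}")
# accepted ~ 6,954, overruled ~ 1,872, remaining ~ 674
```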
Despite the results, though, the researchers point out that "cognitive surrender is not inherently irrational." While relying on an LLM that's wrong half the time (as in these experiments) has obvious downsides, a "statistically superior system" could plausibly give better-than-human results in domains such as "probabilistic settings, risk assessment, or extensive data," the researchers suggest. "As reliance increases, performance tracks AI quality," the researchers write, "rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender." In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.
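To make the "performance tracks AI quality" point concrete, here is a minimal sketch, not from the paper: a user defers to the AI with probability `reliance` and otherwise answers alone, and the `human_accuracy` baseline of 0.8 is an assumed value for illustration.

```python
# Illustrative expected-accuracy model (an assumption, not the study's method):
# the user defers to the AI with probability `reliance`, else answers alone.

def expected_accuracy(reliance: float, ai_accuracy: float,
                      human_accuracy: float = 0.8) -> float:
    """P(correct) = P(defer)*P(AI correct) + P(answer alone)*P(human correct)."""
    return reliance * ai_accuracy + (1 - reliance) * human_accuracy

# Compare an AI wrong half the time (as in the experiments) with a strong one.
for reliance in (0.0, 0.5, 0.732, 1.0):  # 0.732 ~ the observed acceptance rate
    faulty = expected_accuracy(reliance, ai_accuracy=0.5)
    strong = expected_accuracy(reliance, ai_accuracy=0.95)
    print(f"reliance={reliance:.3f}  50%-accurate AI: {faulty:.2f}  "
          f"95%-accurate AI: {strong:.2f}")
```

Under these assumed numbers, more deference drags accuracy down toward 0.5 when the AI is faulty and lifts it toward 0.95 when the AI is strong, which is exactly the structural vulnerability the researchers describe.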
Read more of this story at Slashdot.