AI predicts the benefits of AI job automation. "This is absurd, they might as well be shaking a Magic 8 ball and writing down the answers it displays," says expert

The Tony Blair Institute for Global Change, a non-profit founded by the former British prime minister, has published a paper (PDF) predicting that AI automation of public-sector jobs could save a fifth of workers' time and deliver a big reduction in workforce and government costs. The paper's findings were presented by Tony Blair at the opening of the 2024 Future of Britain Conference.

Just one small problem: the prediction was made by ChatGPT. And as the experts 404 Media interviewed about this strange ouroboros of a report point out, AI may not be the most trustworthy source to consult on how reliable, useful, or beneficial AI might be.

The Tony Blair Institute researchers gathered data from O*NET, a US database of occupation-specific descriptors covering nearly 1,000 occupations, with the aim of assessing which of those occupations' tasks could be performed by AI. However, consulting human experts to determine which roles might be suitable for AI automation was deemed too difficult a problem to solve, so the researchers funnelled the data into ChatGPT to make the prediction instead.
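For a sense of scale, the O*NET task data the researchers started from is publicly downloadable. Here's a minimal sketch of that gathering step in Python; the file and column names follow O*NET's published task-statements release, but treat them as assumptions that may vary between database versions, and none of this is the institute's actual code.

```python
# Hedged sketch: load the O*NET task statements that pair each occupation
# with its constituent tasks. Assumes the tab-delimited text release of the
# O*NET database (e.g. "Task Statements.txt" from onetcenter.org); the file
# and column names are assumptions and may differ between releases.
import pandas as pd

tasks = pd.read_csv("Task Statements.txt", sep="\t")

# Each row is one task belonging to one occupation.
print(tasks[["O*NET-SOC Code", "Title", "Task"]].head())
print(f"{tasks['O*NET-SOC Code'].nunique()} occupations, {len(tasks)} tasks")
```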

Trouble is, as the researchers noted themselves, LLMs “may or may not give reliable results.” The solution? Ask it again, but differently.

“We first use GPT-4 to categorise each of the 19,281 tasks in the O*NET database in several different respects that we consider to be important determinants of whether the task can be performed by AI or not. These were chosen following an initial analysis of GPT-4’s unguided assessment of the automatability of some sample tasks, in which it struggled with some assessments.”

“This categorisation enables us to generate a prompt to GPT-4 that contains an initial assessment as to whether it is likely that the task can or cannot be performed by AI.”
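To make that two-pass setup concrete, here's a minimal sketch of what such a pipeline might look like, using the OpenAI Python client. The prompts, the attribute list, and the categorisation scheme are all illustrative assumptions; the paper does not publish its actual prompts, and this is not the institute's code.

```python
# Hedged sketch of the paper's two-pass approach: pass 1 tags each task
# along a few attributes, pass 2 feeds those tags back as an "initial
# assessment" and asks for a final verdict. The prompts, attributes, and
# model name are illustrative assumptions, not the institute's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ATTRIBUTES = "physical effort, interpersonal contact, routineness, data intensity"

def ask(prompt: str) -> str:
    # One chat completion; returns the model's text reply.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def classify_task(task: str) -> str:
    # Pass 1: characterise the task along fixed dimensions.
    tags = ask(f"Rate this work task as high or low on each of: "
               f"{ATTRIBUTES}.\nTask: {task}")
    # Pass 2: hand the model its own tags back and ask for a verdict.
    return ask(f"Task: {task}\nInitial assessment: {tags}\n"
               "Based on the above, can AI perform this task? "
               "Answer yes or no, with one sentence of reasoning.")

print(classify_task("Schedule appointments and maintain event calendars."))
```

Note that the "initial assessment" handed to the second call is simply the model's own output from the first, which is exactly the remixing of synthetic text that Bender objects to below.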

So that'd be AI deciding which jobs could be automated by AI, and then concluding that AI would be beneficial. Followed by an international figure extolling the virtues of that conclusion to the rest of the world.

Unsurprisingly, those looking into the details of the report are sceptical of the results' veracity. As Emily Bender, a University of Washington professor in its Computational Linguistics Laboratory, interviewed by 404 Media, puts it:

“This is absurd—they might as well be shaking a Magic 8 ball and writing down the answers it displays.”

“They suggest that prompting GPT-4 in two different ways will somehow make the results reliable. It doesn’t matter how you mix and remix synthetic text extruded from one of these machines—no amount of remixing will turn it into a sound empirical basis.”

The findings were reported by several news outlets without mention of ChatGPT's involvement in the paper's predictions. It's unknown whether Big Tony knew that the information he was presenting was based on less-than-reliable methods, or indeed whether he read the paper in detail himself.

While the researchers here at least documented their flawed methodology, it does make you wonder how much seemingly accurate information is being created from AI predictions and then presented as verifiable fact.

Nor, for that matter, how much AI-created content has just enough believability to pass without serious investigation. To prove that this article isn't an example of such content, here's a spelling mistike. You're welcome.
