What the experts say

Summary

The AI Apocalypse: A Scorecard. How worried are top AI experts about the threat posed by large language models like GPT-4?

What should we make of OpenAI’s GPT-4, anyway? Is the large language model a major step on the way to an artificial general intelligence (AGI)—the insider’s term for an AI system with a flexible human-level intellect? And if we do create an AGI, might it be so different from human intelligence that it doesn’t see the point of keeping Homo sapiens around?

If you query the world’s best minds on basic questions like these, you won’t get anything like a consensus. Consider the question of GPT-4’s implications for the creation of an AGI. Among AI specialists, convictions range from Eliezer Yudkowsky’s view that GPT-4 is a clear sign of the imminence of AGI to Rodney Brooks’s assertion that we’re absolutely no closer to an AGI than we were 30 years ago.

On the topic of the potential of GPT-4 and its successors to wreak civilizational havoc, there’s similar disunity. One of the earliest doomsayers was Nick Bostrom; long before GPT-4, he argued that once an AGI far exceeds our capabilities, it will likely find ways to escape the digital world and methodically destroy human civilization. On the other end are people like Yann LeCun, who reject such scenarios as sci-fi twaddle.

See the IEEE Spectrum source for the full article and its table of expert comments.

Source: IEEE Spectrum


About

Compiled by Eliza Strickland and Glenn Zorpette

Discuss

OnAir membership is required. The lead Moderator for the discussions is Scott Joy. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
