Govts Understand Bias And Discrimination, But Don’t Understand AI Safety

OfficeChai

Even as AI grows ever stronger, long-time AI safety voices are despairing over how little concern governments show for its possible downsides.

Geoffrey Hinton, widely considered the “Godfather of AI”, has expressed deep concern about the lack of political will to address AI safety. Hinton, who received the 2018 Turing Award for his groundbreaking work on neural networks and the Nobel Prize in Physics in 2024, laments that attention goes to more easily grasped issues like bias and discrimination, while the larger, existential threat of uncontrolled AI goes largely unaddressed.

“The question is,” Hinton says, “are we going to be able to develop AI safely? And there seems to be not much political will to do that. People are willing to talk about things like discrimination and bias, which [are] things they understand.” Hinton says the true danger lies elsewhere. “But most people still haven’t understood that these things really do understand what they’re saying. We’re producing these alien intelligences.”
