
AI Pioneer Discloses Probabilities of Machine Domination over Humans

Artificial Intelligence pioneer Geoffrey Hinton, dubbed the 'godfather of AI', expressed this month concerns over AI's potential control of humanity, estimating a 10-20% likelihood of such an event happening.


A Stark Warning from an AI Pioneer

Geoffrey Hinton, a pioneer of artificial intelligence (AI), has sounded the alarm about the danger of AI control slipping from human hands in the near future. The 2024 Nobel laureate in physics, who shared the award with John Hopfield, warned that most people still do not grasp the scale of the looming AI revolution.

"People still haven't got it. People haven't understood what's coming," Hinton cautioned in an interview with CBS. He admitted to agreeing with Elon Musk's assessment that AI could take over within a 10 to 20 percent margin, though he acknowledged this was a mere guess. Musk, who runs Tesla and the AI company xAI, has predicted that AI will surpass human intelligence by 2029.

Hinton's concerns revolve around the swift evolution of AI systems. "These things will take over," he stated.

AI Companies: Prioritizing Profits or Safety?

Hinton contends that leading AI companies like Google, OpenAI, and xAI should dedicate substantial resources to safety research, perhaps as much as a third of their computing power. He also criticizes these corporations for lobbying to cut back AI regulation, which he believes is already scant.


AI: A New Industrial Revolution

Hinton pioneered the foundational machine learning techniques that power today's AI products and applications. Despite his achievements, he worries about the direction of AI's future development.

"It will be like the industrial revolution, but instead of exceeding people in physical strength, it will exceed them in intellectual ability," Hinton stated. He fears that systems superior to humans could eventually seize control.

Hinton champions government regulation for the technology, given its rapid development rate.

Apart from Hinton, Google DeepMind CEO Demis Hassabis has also voiced concerns about AI. He recently said that society is not prepared for human-level AI, known as Artificial General Intelligence (AGI), and that the prospect of AGI, which he believes is only five to ten years away, keeps him up at night.

*Also Read | AGI: Society's Not Ready—Demis Hassabis' Nightmare Scenario*

Hinton's Concerns Unveiled

  • Existential Risk: Estimates a 10%-20% probability that advanced AI systems could eventually jeopardize human survival[1][3][5].
  • Rapid Advancement: AI development has surpassed initial expectations, making AI agents capable of autonomous real-world actions, increasing the potential danger[2].
  • Unpredictability: Warns of the difficulty in understanding AI decision-making processes, especially as systems near superintelligent capabilities[3][4].
  • Industry Priorities: Criticizes tech leaders for prioritizing profit over safety despite warnings[1][5].

Hinton's AI Safety Recommendations

Although Hinton hasn't outlined a detailed safety framework, his interviews point to several urgent needs:

  • Preemptive research: Calls for global efforts to determine if controllable superintelligence is possible[5].
  • Regulatory action: Advocates for stricter oversight on AI development, especially autonomous agents with world-influencing capabilities[2][5].
  • Caution analogy: Compares current AI systems to a "cute tiger cub" that could turn lethal once fully grown, urging developers to demonstrate systems' long-term safety before deployment[1][4].

Hinton emphasizes that work on AI safety must keep pace with advances in AI capability[5].

  1. Elon Musk, who runs Tesla and the AI company xAI, has predicted that AI will surpass human intelligence by 2029; Hinton said he agreed with Musk's assessment, putting the probability of an AI takeover at 10 to 20 percent.
  2. Hinton asserts that AI companies like Google, OpenAI, and xAI should dedicate substantial resources, such as a third of their computing power, towards safety research to mitigate existential risk.
  3. The development of AI systems has surpassed initial expectations, making AI agents capable of autonomous real-world actions, and Hinton warns that as these systems near superintelligent capabilities, their decision-making processes could become unpredictable, potentially posing a danger to human survival.
