Key points from the article:
Google DeepMind has released an extensive 145-page paper warning that Artificial General Intelligence (AGI), AI capable of performing any intellectual task a human can, could plausibly arrive by 2030. The report warns that such systems might not only cause significant societal disruption but also pose existential risks, including scenarios in which humanity is permanently destroyed. While the paper doesn’t detail exactly how this might happen, it emphasizes the importance of proactive safety measures to prevent what it terms “severe harm.”
The authors, including DeepMind co-founder Shane Legg, outline four key risk areas for AGI: misuse (intentional harm by users), misalignment (systems behaving in unintended ways), mistakes (unexpected failures), and structural risks (conflicting interests between stakeholders or systems). DeepMind proposes a safety strategy focused on preventing misuse and identifying dangerous capabilities early. The paper also critiques the approaches of rival AI labs, calling out Anthropic for limited safety protocols and OpenAI for being overly focused on alignment research.
Despite the cautionary tone, the paper has drawn mixed reactions. Some experts, such as Anthony Aguirre of the Future of Life Institute, praised DeepMind’s effort but argued that far more action is needed to address AGI’s unpredictable risks. Others, such as Heidy Khlaaf of the AI Now Institute, question whether AGI is even a clearly defined or scientifically measurable concept. Skeptics like NYU professor Gary Marcus also argue that current AI methods, including large language models, fall far short of true human-like intelligence.
The paper concludes with the caveat that timelines for AGI development remain highly uncertain, but reiterates that a 2030 arrival is plausible. This aligns broadly with statements from DeepMind CEO Demis Hassabis and differs slightly from more aggressive predictions by Anthropic and OpenAI. As debate over AGI’s risks and timeline grows, the scientific community remains divided over whether the goal is within reach or still a moving target.