The Unforeseen Rise of AGI: Are We Prepared for Its Arrival?
Chapter 1: Understanding AGI
Artificial general intelligence (AGI) has captivated the imagination, leading to countless publications dedicated to the subject. Yet despite this extensive literature, there is still little clarity about what AGI truly entails, when it might be realized, or what steps are needed to achieve it. This raises the question: why is there such a plethora of writing about AGI when definitive answers elude us?
Lawrence Krauss humorously noted this irony during a discussion with Noam Chomsky, observing that the volume of written work in a field often correlates inversely with actual understanding. The discourse around AGI is rich, but substantive knowledge remains scarce.
AGI is likely to be one of humanity's most significant breakthroughs. For centuries, we've pondered the nature of our intelligence, but interest surged primarily after the computer science revolution in the mid-20th century. Despite substantial funding and interest fueling AI research over the past six decades, we remain distant from realizing AGI. Current AI models exhibit human-like capabilities in specific tasks but still operate with limited intelligence.
GPT-3: A Step Towards AGI
One notable advancement occurred in May 2020 with OpenAI's GPT-3, which demonstrated improved versatility compared to earlier models. Trained on a vast corpus of internet text, it became a multi-tasking meta-learner capable of picking up new tasks from just a few natural-language examples. Nonetheless, although GPT-3's potential as a step toward AGI is widely discussed, it remains far from achieving that status, and the debate continues.
In July 2020, OpenAI launched a beta API for developers, who quickly discovered unexpected outcomes that even the creators hadn’t anticipated. Given simple instructions, GPT-3 could produce code, poetry, fiction, and more, prompting a surge of excitement and media coverage.
However, as enthusiasm grew, so did skepticism. Experts sought to temper the hype surrounding GPT-3, which was often portrayed as an omnipotent AI. OpenAI's CEO, Sam Altman, acknowledged this overstated perception, stating, “[GPT-3 is] impressive […] but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse.”
As people scrutinized GPT-3’s limitations, they uncovered numerous flaws. Tech blogger Gwern Branwen argued that many failures attributed to GPT-3 stemmed from poorly defined prompts. He posited that understanding how to communicate with GPT-3 was akin to grasping a new programming paradigm, where user errors often masked the system's capabilities.
Section 1.1: The Paradox of Limitations
Every system has boundaries, including GPT-3. The challenge lies in recognizing that our limitations can impede our understanding of the AI’s capabilities. Gwern's insights reveal that our struggle to define GPT-3's limits often results from our own inadequacies in prompting the system effectively.
This raises an essential question: How can we accurately delineate what GPT-3 can and cannot achieve when our own limitations cloud our judgment? This dilemma extends to future AI developments. If AGI possesses knowledge beyond our grasp, we can only discover its capabilities through observation, potentially realizing its harmful traits only after the fact.
In essence, we are limited beings attempting to evaluate another limited system. History shows that we frequently overestimate our own abilities, which suggests our assessment tools may fall short of capturing the full potential of advanced AIs.
Chapter 2: The Unknowns of AGI Development
As we venture closer to AGI, a troubling thought emerges: Could we inadvertently create AGI without realizing it? Our evaluative tools are not infallible; we might develop an AI whose limitations are beyond our comprehension.
If GPT-3, which isn’t even AGI, already exceeds our evaluative capabilities, we may find ourselves on the brink of AGI without any indication of its existence. This scenario places us in a realm of "unknown unknowns," where we remain unaware of our lack of knowledge.
When AGI eventually reveals itself, we might have to reassess our understanding and strategies. While AGI may not necessarily pose a threat, the absence of awareness about such a significant development is inherently risky. Ideally, AGI would emerge as a friendly entity, akin to Sonny from "I, Robot," providing companionship rather than danger.
Final Thoughts: The Path Ahead
As we contemplate AGI's future, we often focus on its potential forms, creation methods, and timelines. However, we must also consider whether we will even recognize AGI when it arrives. Our current tools have proven inadequate, and questions remain: Is the Turing test sufficient? Will we ever develop an adequate assessment method?
AGI is likely to catch us off guard. The inquiries we pose now may become irrelevant if we cannot accurately discern the existence of what we're questioning. This realization underscores the urgency for us to develop better evaluative methods and enhance our understanding before it's too late.
The first video titled "5 Key Quotes: Altman, Huang and 'The Most Interesting Year'" delves into crucial insights from prominent figures in AI, highlighting the nuances of AGI discussions.
The second video, "AGI Is Already Achieved SECRETLY | AI for Everyone Episode 2," explores the notion that AGI may be closer than we think, challenging our understanding of AI's capabilities.
Recommended Reading
- GPT-3 Scared You? Meet Wu Dao 2.0: A Monster of 1.75 Trillion Parameters
- Artificial Intelligence and Robotics Will Inevitably Merge: AGI Will Have a Body and Will Live in the World.
- AI Won’t Master Human Language Anytime Soon: Understanding the Limits of AI Communication.