OpenAI has introduced a five-tier system to track AI’s progress toward Artificial General Intelligence (AGI), the hypothetical form of AI that surpasses human capabilities in most cognitive tasks. According to Bloomberg, the framework attempts to offer a clearer picture of OpenAI’s approach to AI safety and its vision for the future of intelligent machines.
The Tiers Leading AI to AGI
The five tiers represent a hierarchical structure, outlining the evolving capabilities of AI systems. Level 1, the current baseline, encompasses conversational AI, exemplified by chatbots we interact with daily. As we progress through the levels, the complexity of tasks AI can perform increases significantly.
Level 2, “Reasoners,” signifies AI with basic problem-solving abilities comparable to a human Ph.D. holder, albeit without access to external tools or resources. This level represents a significant leap from basic conversation, requiring the AI to analyze situations, identify problems, and formulate solutions independently.
Level 3, “Agents,” introduces the concept of AI acting autonomously on behalf of a user. Imagine an AI assistant that can manage your schedule, complete tasks over extended periods (several days), and adapt to changing circumstances. This level signifies a transition from reactive AI (responding to prompts) to proactive AI (taking initiative).
Level 4, “Innovators,” marks a significant milestone. Here, AI transcends problem-solving and ventures into the realm of creativity. This level signifies AI that can independently generate novel ideas, inventions, or solutions, potentially accelerating human progress in various fields.
The pinnacle of this system is Level 5, “Organizations.” At this level, AI hypothetically possesses the capability to manage and operate entire organizations. This level represents a complete paradigm shift, with AI systems assuming leadership roles and driving complex decision-making processes within an organization.
Where is OpenAI in this System?
OpenAI believes it currently occupies Level 1, with its AI models demonstrating proficiency in conversational tasks. However, the company suggests it is approaching Level 2, with its GPT-4 language model showing promising signs of human-like reasoning abilities during an internal research project.
This five-tier system is still a work in progress. OpenAI notes that AI research is constantly evolving and plans to gather feedback from various stakeholders, including employees, investors, and its board of directors. This feedback will be used to refine the system and ensure it accurately reflects the complexities of measuring AI progress.
What is AGI? Can AI Really Advance to AGI?
The concept of AGI has captivated researchers and futurists for decades. While there’s no universally agreed-upon definition of AGI, OpenAI suggests it represents a highly autonomous system capable of surpassing humans in most economically valuable tasks. Achieving AGI is believed to require immense computational power and resources, potentially costing billions of dollars.
OpenAI’s CEO, Sam Altman, has expressed optimism about achieving AGI within the next decade. However, the criteria for reaching this stage remain a topic of debate within the AI research community. Google DeepMind proposed a similar multi-level framework in November 2023, highlighting a convergence of ideas within the field. OpenAI’s approach emphasizes collaboration, stating its willingness to support any “value-aligned, safety-conscious project” that contributes to the development of AGI.
Progress, Challenges, and Open Questions
The unveiling of this five-tier system is a significant step forward in gauging progress toward AGI. It provides a transparent framework for OpenAI’s internal development efforts and fosters discussion about the capabilities and limitations of AI.
However, challenges remain. The finer details of OpenAI’s classification methods are not yet fully transparent. Additionally, the disbanding of OpenAI’s “Superalignment” team, which previously focused on the existential risks of advanced AI, has raised concerns about how the company prioritizes safety.