"AGI Will Destroy Us!" That's What They Want You To Think
- Milton Omena
- Jun 4, 2024
- 3 min read
I was recently watching an interview on Lex Fridman’s podcast with Roman Yampolskiy, where they discussed scenarios and possible futures for Artificial General Intelligence (AGI). Lex, always the optimist, compared the current anxiety over AGI to past technological fears, arguing for a potentially better future.
Roman's view, on the other hand, is pretty grim: he believes that artificial intelligence will be the end of humanity, either through direct threat or existential despair. He even stated:
“The only way to win is by not playing this game.”
Now, I agree with Roman to an extent, but I would argue that we aren’t even playing the game. Let me explain.
First, let’s define our terms. “AI” is a broad term whose meaning often varies. Here’s how I will define these categories in this article:
Weak AI: Narrow models performing specific tasks (e.g., Midjourney, GPT-4, Sora). These models are competent but limited by context and unable to learn in real time. They are essentially complex decision trees.
Strong AI: Multimodal AIs capable of performing tasks and learning in real time. We don’t have a perfect example yet, but imagine GPT-4 with real-time learning, updates and connectivity. It’s still parameter-driven but can reinterpret those directives dynamically.
Soft AI: AIs that perform human-like tasks based on initial directives or prompts.
Hard AI: AIs that think like humans, capable of reasoning, self-contemplation, creativity, and actual awareness. They don’t necessarily need initial parameters; they have will.
I envision these AI types on a Cartesian plot with two dimensions (a rough sketch of the plot follows the examples below):
Vertical Axis: Adaptability (Weak AI to Strong AI)
Horizontal Axis: Awareness (Soft AI to Hard AI)
The 0 point on each axis represents the human equivalent.
I populated this plot with some recognizable characters:
Butter-bot from Rick and Morty: Self-aware for a laugh, only performs one menial task — passing the butter.
HAL 9000 from 2001: A Space Odyssey: Highly adaptable and capable, it can control a ship’s systems and process information rapidly. However, it’s bound by its directives and contained within the ship.
Replicants from Blade Runner: As capable as humans, with similar adaptability and limitations due to their human-like bodies. They are almost self-aware but lack full consciousness (assuming no malfunction).
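If you want to reproduce this plot yourself, here is a minimal Python/matplotlib sketch. Only the two axes, the human-equivalent origin, and the three characters come from the text above; the exact coordinates I assign to each character are my own rough guesses, purely for illustration.

```python
import matplotlib.pyplot as plt

# x-axis: Awareness (Soft AI -> Hard AI), y-axis: Adaptability (Weak AI -> Strong AI)
# 0 on each axis marks the human equivalent. Coordinates below are illustrative guesses.
characters = {
    "Butter-bot": (0.2, -0.9),   # self-aware for a laugh, performs one menial task
    "HAL 9000":   (-0.4, 0.6),   # highly adaptable, but bound by its directives
    "Replicants": (-0.1, 0.0),   # roughly human-level on both axes
}

fig, ax = plt.subplots()
for name, (awareness, adaptability) in characters.items():
    ax.scatter(awareness, adaptability)
    ax.annotate(name, (awareness, adaptability),
                textcoords="offset points", xytext=(5, 5))

# Cross-hairs through 0 mark the human equivalent on each axis
ax.axhline(0, linewidth=0.5)
ax.axvline(0, linewidth=0.5)
ax.set_xlabel("Awareness (Soft AI → Hard AI)")
ax.set_ylabel("Adaptability (Weak AI → Strong AI)")
ax.set_xlim(-1.2, 1.2)
ax.set_ylim(-1.2, 1.2)
plt.show()
```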
The Future of AGI and Our Fears
Why are we so afraid of AGI? The short answer is that fear sells. It’s a powerful motivator, especially in sales. Companies can overstate their products’ capabilities by making them seem scarier than they are. Nobody knows when AGI will be achieved, but it pays to appear as if it’s just around the corner and you are the one holding it back.
Using our plot, let’s define our boundaries and current technological progress. Here’s my take on AGI:
Faux-AGI: The AGI discussed by Silicon Valley experts. It is less self-aware, advancing in connectivity and context rather than in free will and consciousness.
True AGI: This involves machines not only learning and performing like humans but also wanting and reasoning like them. This is where the danger lies and where diminishing ROI becomes evident.
I don’t think we are building towards true AGI, but do we even need it? I argue that performing tasks deterministically based on initial parameters is sufficient for consumer needs and for a good product. Introducing free will and independent reasoning increases complexity and cost without significant added value.
There is little incentive to build true AGI. For those attempting it, the barrier is so high that it’s not an imminent worry. As of June 2024, some tech experts predict that Faux-AGI could arrive by 2026.
Maybe I’ll Eat My Words
I might regret this prediction if we end up under robot overlords in a few years, but I don’t believe we currently face existential threats from AGI. A market reset with AI potentially wiping out jobs, however — that’s a different story. Good luck with that.