Why AI Could Mean ‘Lights Out’: Part 1

This post covers the basics of AI, AGI and ASI, and how AI is changing the world.
If you’re already familiar with the basics, you can skip to Part 2 on the near-term risks from AI or Part 3 on the long-term risks.

Why Worry About AI?

AI is amazing and is going to do astounding things for humankind!

From saving tens of thousands of lives every year with self-driving cars, to exponentially increasing the rate at which we make medical breakthroughs – AI could change our future for the better, in countless unimaginable ways.

I just wanted to get that out of the way before I write a whole three-part article about why, unfortunately, AI will also do some unimaginably bad things.

Both are true. There’s a very silly ‘them and us’ dynamic lurking around, pitting those who are ‘for’ AI against ‘doomers’ who are ‘against’ it.

Some high-profile figures seem to be hell-bent on taking a black-and-white approach to this – it’s silly, and anyone doing that needs to grow up.

So, to be 100% crystal clear, AI is a really good thing. But it is also an extremely dangerous thing, and this article is going to be talking about that part of it.

Because things could get really, REALLY bad – and we all need to talk about it.

I’m an inherently optimistic person, but this is a rare occasion when I think we should be focusing not on the hope of utopia, but on the risk of dystopia, or worse.

Because things really could get that bad, shining a light on the risks from AI (while also celebrating the good stuff) is the only logical thing to do.


What’s So Dangerous About AI?

Over the next few years, AI is going to change our way of life in ways that many people still aren’t seeing. It’s going to be huge.

AI is also the most significant threat to humanity of our time.

Take a look at these quotes from people who have succinctly summed up some of the issues: 

“For our way of life as we know it, it’s game over. Our way of life is never going to be the same again,” Mo Gawdat, former Chief Business Officer of Google X and author of Scary Smart, speaking about AI.

“If I were advising governments, I would say that there’s a 10% chance these things will wipe out humanity in the next 20 years,” Geoffrey Hinton, known as one of the ‘Godfathers of AI’ (who left his job at Google so that he could speak more openly about the risks of AI).

“We’re rushing towards a cliff, but the closer we get the more scenic the views are,” Max Tegmark, physicist and AI researcher at MIT, co-founder of the Future of Life Institute – discussing losing control of AI.

“The ‘bad case’ (scenario for advanced AI) is lights out for all of us,” Sam Altman, CEO of OpenAI (the company behind ChatGPT) – referring to the accidental misuse of AI.

Some of the world’s most qualified and famous technologists, thought leaders and academics are shouting these warnings, and wondering why so many people still don’t seem to be listening.

Others are saying some rather worrying things too:

“AI is the future… whoever becomes the leader in this sphere will become the ruler of the world,” Vladimir Putin.

There are many reasons behind the massive threat posed by AI (most of which have nothing to do with a Terminator-style takeover).

In this series we’ll look at the risks that we’re already (or very nearly) facing, then move on to the longer-term issues.

In practice, that means looking first at advanced narrow AI (the now), then at AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence) (the future).

We don’t know when advanced AI will become AGI or when AGI will become ASI (we’ll define them all in a moment). But, just like the ‘them and us’ dynamic I mentioned at the beginning, there’s a strange school of thought around the timelines too.

I read this article recently on the University of Oxford website. It’s written by two professors and says ‘AI poses real risks to society. Focusing on long-term imagined risks does a disservice to the people and the planet being impacted by this technology today’.

I don’t want to be disrespectful to two exceptionally clever people, but this is nonsense.

It’s like saying that focusing on the risk from nuclear weapons does the planet a disservice because, while the bombs aren’t going off, there are guns on the streets right now. So just ignore the nukes and talk only of guns. See – nonsense.

Everyone I have ever met is quite capable of worrying about more than one thing at a time. So let’s go ahead and be concerned about the near-term risks AND the end of the world…

But WHY Is AI Dangerous?!

Ultimately, the fact that AI is likely to become much more intelligent than we are is what may make it dangerous. But there are many specific ways in which that danger may manifest, and many of them will arrive before AI surpasses human intelligence.

In Part 2 of this series we’re going to look into the ways in which humans are likely to use AI to create some very dangerous situations, and in Part 3 we’re going to look at why AI systems themselves could cause an existential crisis in the future – unless we stop that from happening.

So, let’s get some basic definitions out of the way, then head over to Part 2 (near-term risks) or Part 3 (existential risks), to see what could possibly go wrong with AI… see you there!

AI Basic Definitions

Artificial Intelligence (AI): AI systems are computer systems that simulate human intelligence processes. Today’s AI is sometimes referred to as ‘narrow AI’ or ‘weak AI’, because its intelligence can be applied in only one narrow field. Examples include autonomous vehicles and digital voice assistants like Alexa – they’re very intelligent within their own narrow field, but they’re not ‘generally’ intelligent, unlike…

Artificial General Intelligence (AGI): AGI is a (currently theoretical) AI system whose intelligence goes beyond a narrow field and instead spans a range of domains. It is sometimes referred to as ‘Strong AI’. AGI would be able to match or exceed human intelligence in most cognitive tasks. Most expert opinion on when we will reach AGI ranges from sometime this year (2024) to around 2040, with many appearing to settle around the middle of that range. For what it’s worth, my belief is that it will be sooner rather than later.

Artificial Super Intelligence (ASI): ASI is an AI system that greatly exceeds the intelligence of AGI (so ASI is currently theoretical too). There’s no set point at which AGI becomes ASI – I tend to think of it as intelligence at a level we can no longer even understand. Expert opinion on when, if ever, we will reach ASI is divided. Some think it would be reached extremely soon after achieving AGI, on the basis that an AGI would be able to improve its own code and therefore increase its own intelligence exponentially. At the other end of the scale, some think that could never happen. If anyone can explain that logic to me in a sensible way, please do!

Large Language Models (LLMs): LLMs are AI systems that predict text and use those predictions to generate new content. Examples include OpenAI’s GPT-4 (the model behind ChatGPT), Google’s Gemini, Anthropic’s Claude and Meta’s Llama.

For more AI terms, there’s a very handy glossary of AI terms on the TechTarget website.
