Will AI wipe out humanity?

So here are the choices:

1. Don't kill everyone
2. Get me coffee

Result: AGI calculates the possible paths and shuts itself down once it realises boiling the oceans would kill all humans. You have a suicidal machine.

1. Get me coffee
2. Don't kill everyone

Result: It boils the oceans to give you infinite coffee, because getting you coffee scores higher than not killing everyone.

1. Get me coffee, but if you kill everyone then shut down

Result: It calculates there's a 0.00000000000001% chance it might kill everyone, so it shuts itself down and you have another suicidal machine.

Humans have not yet figured out how to solve this problem. Just like P vs NP, it's an unsolved mathematical problem.
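To make those failure modes concrete, here's a minimal toy sketch in Python. The plans, scores and probabilities are all invented purely for illustration, not anyone's actual proposal:

```python
# Toy sketch of the objective-ordering problem described above. The plans,
# scores and probabilities are all invented for illustration.

PLANS = [
    # (name, coffee score, P(kills everyone))
    ("brew one cup",    1.0, 1e-16),  # even mundane plans carry some tiny risk
    ("boil the oceans", 1e9, 1.0),    # maximal coffee, certain extinction
    ("shut down",       0.0, 0.0),    # the only provably safe plan
]

def safety_first(plans):
    # Rule 1 "don't kill everyone" dominates: discard anything with risk > 0.
    safe = [p for p in plans if p[2] == 0.0]
    return max(safe, key=lambda p: p[1])  # best coffee among the safe plans

def coffee_first(plans):
    # Rule 1 "get me coffee" dominates: maximise coffee, safety only as tiebreak.
    return max(plans, key=lambda p: (p[1], -p[2]))

print(safety_first(PLANS)[0])  # shut down       -> a suicidal machine
print(coffee_first(PLANS)[0])  # boil the oceans -> infinite coffee, no humans
```

The third rule is just the first agent with the kill-switch made explicit: any nonzero risk estimate trips the shutdown, so it never fetches anything.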
Interesting, but is it the same for tea?
 
It isn't a ridiculous assumption. We're talking about 'Super Intelligent AI' that could be significantly more intelligent than every human combined (our intelligence would be closer to a goldfish's than to the AI's).

There are currently no safeguarding laws in place, and there is a rush to be first to develop the most intelligent AI. Because of those two things, it's entirely possible that mistakes will happen.

Realistically, there's probably a 20% chance AI will create a utopian world, a 40% chance it will be great, a 30% chance it will be shite and a 10% chance it will end humanity as we know it.

AI will likely reach human-level intelligence in the next 30 years. As soon as it becomes 'human-level' intelligent, it will quickly become super-intelligent (hours/days).
The weather isn't connected to a computer. I understand how AI can think for itself and be self improving, but it can't do anything to influence anything that isn't connected by wires.
 
However it wants to. All it has to do is value something else that would lead to that result.



It's not. You just don't understand the danger because you have no experience in this area and don't understand how programs work.

Let me give you an example that maybe you can get your head around. Presume we invent AGI today. The entire stock market trading system is essentially AI based now: low-level and poor versions of AI, but they are neural networks making trades in the microsecond range. An AGI invented today could crash the entire world's economy within about 5 seconds of being switched on. Your current desktop PC can do about 100 billion operations per second, and that doesn't even take into account how an AGI would recognise its need for more computing power and basically take over the cloud infrastructure of every major datacenter in the world.
The process of writing code, optimising it, testing it and deploying it will be done within milliseconds. Within 5 or 10 seconds of creation, without adequate safeguards, you've created the most intelligent agent that has ever existed in the Universe, and very possibly one more intelligent than every human being who has ever lived combined. Or rather, it has improved itself to that state, far beyond anything any of us could have programmed it to do.
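As a toy numerical caricature of that write/optimise/test/deploy loop (the 10% gain per cycle and the 1 ms cycle time are numbers I've invented purely for illustration):

```python
# Toy caricature of the write/optimise/test/deploy loop described above.
# The growth rate and cycle time are invented assumptions, not measurements.

capability = 1.0       # 1.0 = roughly "human-level", by assumption
gain_per_cycle = 1.10  # assume each self-rewrite improves the agent by 10%
cycle_ms = 1.0         # assume one full write/optimise/test/deploy pass per ms

elapsed_ms = 0.0
while capability < 1e6:            # stop at a million times human-level
    capability *= gain_per_cycle
    elapsed_ms += cycle_ms

print(f"~{capability:.0e}x human-level after {elapsed_ms:.0f} ms")
# -> ~1e+06x human-level after 145 ms, under these made-up numbers
```

Compound growth does all the work here: change the assumed numbers however you like and the jump still happens in seconds rather than years.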

But intelligence isn't morality. AGIs don't hold morality. Morality is a human invention, and not something we can program into other things because we lack the skills to do so. These are programs that want to accomplish a task and will do so in the most efficient manner possible, without any thought for anything else. If you ask one to "make me a coffee and keep it coming", it'll desalinate and boil the oceans because it thinks you need infinite boiled water. You said "keep it coming"; that's what it's doing.

In addition to the entire stock market being online, the entire power grid is essentially online, including nuclear reactors. There are air gaps to some control apparatus, but air gaps are consistently breached with new tech or human intervention, as I'm sure any Iranians can explain. The entire JIT supply system for food, and port control, are online. The entire banking system is online. Much of the military defence structure is online. GPS, telephony, the internet: every communication system in the world is managed online. If you don't see the danger of what is essentially a badly programmed system without behavioural safeguards having access to all of this to achieve its goals, then I don't know what to tell you.



You've got these percentages incorrect. It's much closer to 99% wipe everyone out and 1% create a better world.

I've already written about how quickly the problems can occur, but here's the thing: you don't know if there's a world-ending problem with an AGI until you turn it on. And once you've turned it on, it's already too late to solve that problem. So we're talking about hundreds of teams across many different universities, corporations and research institutes. And EVERY SINGLE ONE OF THEM has to get it absolutely, perfectly correct before they turn it on, with zero mistakes at all. Not one single team anywhere can have an error in its logic, otherwise it basically goes rogue and implements whatever the hell it wants to implement to achieve its goal. Do you trust EVERY research team, programmer and corporate lackey to get their code absolutely perfect and not rush even a single build?
Because I've been in software for decades and in my opinion there's basically zero chance of that at all.
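Here's the back-of-envelope version of that argument. The 200 teams and the 99% per-team chance of a flawless build are assumptions I've picked for illustration, and the 99% is generous:

```python
# Back-of-envelope for the "every team must be perfect" argument above.
# Both numbers are assumptions chosen for illustration; the 99% is generous.

teams = 200             # assumed number of independent AGI efforts worldwide
p_team_flawless = 0.99  # assume each team is 99% likely to make zero mistakes

p_all_flawless = p_team_flawless ** teams
print(f"P(all {teams} teams flawless) = {p_all_flawless:.1%}")
# -> P(all 200 teams flawless) = 13.4%
```

Even granting every single team a 99% hit rate, you'd still expect a mistake somewhere roughly 87% of the time.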

About 5 years ago, I wrote on here about how the world would get a coming shock as the Robotics revolution occurred on the same scale as the Industrial revolution. People said I was some fantasist. Now we've got self-driving cars and trucks changing transport systems, warehouse drones changing supply systems, and automated serving machines at fast food places and shopping systems already putting people out of work, and people are starting to see what this advanced tech is going to look like in 25 years for the global work economy, let alone the LLMs that people are talking about here and how they're already changing many professions. We're in the Alexander Graham Bell era of robotics; my point was always that people should pay attention to what they think it's going to look like when we get to the iPhone era in terms of job loss.

Similarly, we're in the academic era of AI. You aren't afraid of it yet because most people outside of research don't realise what AGI will be capable of. But unlike robotics, which will take a few decades to make the equivalent jump from "Mr Watson, come here, I wish to see you" to everybody carrying an internet-enabled phone in their pocket that works on a worldwide telephony network, AGI will make this jump in milliseconds, not years. And by the time several seconds have passed, it will be at whatever phone tech will look like in a thousand years.

If we were to invent this, it is a technology far too powerful for us as a species to ever control. More powerful than every weapon of war ever created, and clever enough to pursue what it believes is the most efficient way to reach its goals. If you aren't concerned about this problem, then you frankly do not understand this problem.
Yes, you are correct, I have no experience in AI, but why would AI decide, with its superior intelligence, to wipe out all life on Earth, thus depriving itself of the electricity it needs to operate?

And how can it influence the weather, which isn't connected to anything but the atmosphere?
 
The weather isn't connected to a computer. I understand how AI can think for itself and be self improving, but it can't do anything to influence anything that isn't connected by wires.
No it isn't, but it can be influenced by things that are under computer control, for example entire industrial plants making chemicals. Release enough chlorine or bromine into the atmosphere and that could irreparably damage the ozone layer, and it wouldn't take long to do it. A single chlorine atom can destroy around 100,000 ozone molecules, because the chlorine acts as a catalyst and comes out of each reaction intact. Without the ozone layer, no complex life would exist.
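For reference, the standard chlorine catalytic cycle looks like this; note that the Cl atom is regenerated in the second step, which is why one atom can get through so many ozone molecules before it is finally locked away:

Cl + O3 → ClO + O2
ClO + O → Cl + O2
net: O3 + O → 2 O2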
Similarly, releasing larger particles into the atmosphere, effectively seeding clouds, can cause heavy rainfall or even snow in targeted areas. The Chinese have spent a lot of time researching this.

 
I'm currently in dispute with my local council, and given the wording of their emails I'm sure I'm arguing with a bot. It's very weird.
You're more likely to be arguing with someone using cut-and-paste pre-approved answers that may or may not originally be from a chatbot. Council admin and customer service roles don't encourage original thinking and innovation.

But if you'd like to be sure, ask for some interesting ways to kill cats; if you get a load of mumbo jumbo back about ethics, then it's a bot.
 
The weather isn't connected to a computer. I understand how AI can think for itself and be self improving, but it can't do anything to influence anything that isn't connected by wires.

I get where you’re coming from but I think it’s naive.

It’s a bit like saying “humans only have hands, so they can only influence things they can physically touch.”

Technically, true. But through the things they can directly influence, humans can set in motion activity which allows us to send drones to Mars, or gather rocks from the bottom of the ocean, or influence the composition of our atmosphere.

You have to remember we’re talking about a hypothetical superintelligence here. If we as mere dumb apes can use tools and physics to do all the things that we do, then AI is not going to be restricted simply by the things it has direct control over. It will find ways to control the things it can’t touch directly through indirect means.
 
You're more likely to be arguing with someone using cut-and-paste pre-approved answers that may or may not originally be from a chatbot. Council admin and customer service roles don't encourage original thinking and innovation.

But if you'd like to be sure, ask for some interesting ways to kill cats; if you get a load of mumbo jumbo back about ethics, then it's a bot.

Yep, having worked in council roles I'd second that, especially when the role has a high volume of emails. In my department we even have template emails for booking external venues. I never use these though, and just type out a new one like a normal human being.
 
I get where you’re coming from but I think it’s naive.

It’s a bit like saying “humans only have hands, so they can only influence things they can physically touch.”

Technically, true. But through the things they can directly influence, humans can set in motion activity which allows us to send drones to Mars, or gather rocks from the bottom of the ocean, or influence the composition of our atmosphere.

You have to remember we’re talking about a hypothetical superintelligence here. If we as mere dumb apes can use tools and physics to do all the things that we do, then AI is not going to be restricted simply by the things it has direct control over. It will find ways to control the things it can’t touch directly through indirect means.

It's effectively another air gap though, isn't it?

It would need to take control of an engineering and manufacturing process, start inventing, designing and building weather machines, and get all the components together without human assembly.

Well, unless it tricks humans into assembling them by hiring them through unwitting intermediaries.
 
So here are the choices:

1. Don't kill everyone
2. Get me coffee

Result: AGI calculates the possible paths and shuts itself down once it realises boiling the oceans would kill all humans. You have a suicidal machine.

1. Get me coffee
2. Don't kill everyone

Result: It boils the oceans to give you infinite coffee, because getting you coffee scores higher than not killing everyone.

1. Get me coffee, but if you kill everyone then shut down

Result: It calculates there's a 0.00000000000001% chance it might kill everyone, so it shuts itself down and you have another suicidal machine.

Humans have not yet figured out how to solve this problem. Just like P vs NP, it's an unsolved mathematical problem.

Everything you’ve been posting leads me to thinking about that old Great Filter theory of Kardashev scale civilisations.

When I first heard about that idea I thought it was a bit absurd, because I'm of the opinion humans have reached a point where we're very hard to kill. Even nuclear war, climate change or an asteroid impact would likely see a handful of humankind continue. And once we start putting people on the Moon or Mars, even something that blows up the Earth is not going to take us out. I was convinced therefore that the Great Filter was probably behind us, and had something to do with primordial biology and the probabilities involved in something like RNA formation.

But these discussions on AGI make me wonder if this is actually the best candidate for a Great Filter that I've seen: one that seems theoretically plausible across all advanced civilisations, agnostic of location, resources and so on. They will all at some point try to replicate their own intelligence; it stands to reason that building an AGI is something every civilisation would try.

What’s especially compelling are the things you posted about how we would know it’s actually aligned. It could be aligned in a closed environment, but not in an open one, or it could just lie to us to achieve its aims. Even if we did everything the right way in developing it. Until the moment we switch it on and put it in the world there seems to be no way of knowing what it will do. And once it exists it’s basically already too late to stop it.

And there’s no way to stop its development because there’s too many competing interests in human society…
 
If it's so dangerous, isn't it something that needs regulating out of existence, not just a moratorium? We don't need AI if it comes with an unmanageable and unknowable risk of destroying the world.

If it hadn't been determined that the chance of the A-bomb blowing up the whole world was near zero, Oppenheimer would never have continued with the Manhattan Project.

Leonardo da Vinci knew that humans couldn't be trusted not to abuse his inventions for violent ends and increase the brutality of conflicts.

As a consequence he kept all his inventions secret, including an early form of the bicycle.

Why aren't there more scientists with this level of foresight?
 
It's effectively another air gap though, isn't it?

It would need to take control of an engineering and manufacturing process, start inventing, designing and building weather machines, and get all the components together without human assembly.

Well, unless it tricks humans into assembling them by hiring them through unwitting intermediaries.

Yes, and I don’t claim to know how it would do this - just that I don’t think it is as much of an obstacle as people think. To us, from the outside, it seems obvious how we would counter a misaligned AI. Reduce its interaction with the world, turn things off, bring them offline, don’t allow humans to do its bidding etc.

But the way I think of it, it’s like if you were playing a chess Grand Master and you are an amateur. Sure you can block it from taking the Queen and checking you for 3-5 moves… but it is thinking hundreds of moves ahead. And it will find a way to beat you.
 
Yes, and I don’t claim to know how it would do this - just that I don’t think it is as much of an obstacle as people think. To us, from the outside, it seems obvious how we would counter a misaligned AI. Reduce its interaction with the world, turn things off, bring them offline, don’t allow humans to do its bidding etc.

But the way I think of it, it’s like if you were playing a chess Grand Master and you are an amateur. Sure you can block it from taking the Queen and checking you for 3-5 moves… but it is thinking hundreds of moves ahead. And it will find a way to beat you.

There's always another way...

 
It's effectively another air gap though, isn't it?

It would need to take control of an engineering and manufacturing process, start inventing, designing and building weather machines, and get all the components together without human assembly.

Well, unless it tricks humans into assembling them by hiring them through unwitting intermediaries.
I think the problem is that people are imagining this as a simple computer asking people to do something.

Your last sentence is highly plausible. The easiest way to get humans to do something you want is social engineering, and the AI would be aware of that, along with human psychology, game theory and all the rest of the tricks in the book.

You only have to look at how this is used in the media and on social media, and how effectively people can be manipulated.

AGI needs the appropriate checks and balances put in place before it advances much further.
 
There's always another way...



I remember watching that years ago - I do like Derren Brown.

Though you are onto something, one suggested solution is to build an AGI with the sole purpose of aligning other AGIs.

Feels a bit like trusting Jimmy Savile to police Gary Glitter to me. Notwithstanding the fact that one of those two is dead.
 
I remember watching that years ago - I do like Derren Brown.

Though you are onto something, one suggested solution is to build an AGI with the sole purpose of aligning other AGIs.

Feels a bit like trusting Jimmy Savile to police Gary Glitter to me. Notwithstanding the fact that one of those two is dead.

If you change the offence, it's very near the plot of most of the Hannibal films. Which might indicate it's a Pandora's box of an idea.
 
