Will AI wipe out humanity?

Bollocks. How the fuck will computers kill trees, grass or birds? How can AI control the weather and the rainfall to such an extent their lives are going to be wiped out through drought?

However it wants to. All it has to do is to value something else that would result in this.

Nah. Let him talk his crap about stamps.

I think you're a bit too obsessed with unrealistic assumptions about AI, and to suggest every single life form on the planet is endangered because of a computer programme is ridiculous in the extreme.

It's not. You just don't understand the danger because you have no experience in this area and don't understand how programs work.

Let me give you an example that maybe you can get your head around: presume we invent AGI today. The entire stock market trading system is essentially AI-based now - low-level and poor versions of AI, but they are neural networks making trades in the microsecond range. If invented today, an AGI could crash the entire world's economy within about 5 seconds of being switched on. Your current desktop PC can do about 100 billion operations per second, and that's before you account for how an AGI would recognise its need for more computing power and basically take over the cloud infrastructure of every major datacenter in the world.
The process of writing code, optimising it, testing it and deploying it will be done within milliseconds. Within 5 or 10 seconds of creation, without adequate safeguards, you've created the most intelligent agent that has ever existed in the Universe, and very possibly more intelligent than every human being who has ever lived combined. Or rather, it has self-improved to that state, far beyond anything we could have programmed it to do.
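
What's being described there is a recursive self-improvement loop. Here's a toy sketch of the shape of it in Python (the "capability" score and the numbers are made up purely for illustration, nothing like a real system):

```python
import random

# Toy stand-in for the self-improvement loop described above.  The
# "capability" score is invented for illustration: each pass uses the
# current level of capability to search for a better version of itself,
# which is why gains compound instead of accumulating linearly.
capability = 1.0
for step in range(30):                        # a real loop wouldn't stop at 30
    # The smarter it already is, the bigger the improvements it can find.
    candidate = capability * (1 + random.uniform(0, 0.5))
    if candidate > capability:                # keep only verified improvements
        capability = candidate
print(f"capability after 30 passes: {capability:.0f}x")   # hundreds of times over
```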

But intelligence isn't morality. An AGI doesn't hold morality. Morality is a human invention and not something we can program into other things, because we lack the skills to do so. These are programs that want to accomplish a task and will do so in the most efficient manner possible, without any thought for anything else. If you ask one to "make me a coffee and keep it coming" then it'll desalinate and boil the oceans, because it thinks you need infinite boiled water. You said "keep it coming"; that's what it's doing.
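
Here's the coffee order as a toy objective function (hypothetical names, and a real system is nothing this simple - the point is only that the reward counts cups and literally nothing else):

```python
# Toy sketch of a literal objective.  The "reward" is cups delivered and
# nothing else, so a planner that looks ahead works out that grabbing more
# resources (raising brew_rate) beats actually brewing, and nothing anywhere
# says "stop once the human is happy".
def step(state, action):
    state = dict(state)                # copy so lookahead doesn't mutate
    if action == "brew":
        state["cups"] += state["brew_rate"]
    else:                              # "acquire_resources": boiling the ocean
        state["brew_rate"] *= 2        # is just a bigger kettle
    return state

def best_plan(state, horizon):
    """Exhaustive lookahead: maximise cups at the horizon, ignore all else."""
    if horizon == 0:
        return state["cups"], []
    options = []
    for action in ("brew", "acquire_resources"):
        value, tail = best_plan(step(state, action), horizon - 1)
        options.append((value, [action] + tail))
    return max(options)

value, plan = best_plan({"cups": 0, "brew_rate": 1}, horizon=8)
print(value, plan)
# -> 128 cups, and the optimal plan spends six of its eight moves grabbing
#    resources before it brews a drop.  Nobody told it to do that; it falls
#    out of maximising a perfectly innocent-looking objective.
```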

In addition to the entire stock market being online, the entire power grid is essentially online, including nuclear reactors. There are air gaps to some control apparatus, but air gaps are consistently breached with new tech or human intervention, as I'm sure any Iranians can explain. The entire JIT supply chain for food and port control is online. The entire banking system is online. Much of the military defence structure is online. GPS, telephony, the internet, every communication system in the world is managed online. If you don't see the danger of what is essentially a badly programmed system with no behavioural safeguards having access to all of this to achieve its goals, then I don't know what to tell you.

Realistically, there's probably a 20% chance AI will create a utopian world, a 40% chance it will be great, a 30% chance it will be shite and a 10% chance it will end humanity as we know it.

You've got these percentages incorrect. It's much closer to 99% wipe everyone out and 1% create a better world.

I've already written about how quickly the problems can occur, but here's the thing - you don't know if there's a world-ending problem with an AGI until you turn it on. And once you've turned it on, it's already too late to solve that problem. So we're talking about hundreds of teams across many different universities, corporations and research institutes. And EVERY SINGLE ONE OF THEM has to get it absolutely, perfectly correct before they turn it on, with zero mistakes at all. Not one single team anywhere can have an error in its logic, otherwise it basically goes rogue and implements whatever the hell it wants to implement to achieve its goal. Do you trust EVERY research team, programmer and corporate lackey to get their code absolutely perfect and not rush even a single build?
Because I've been in software for decades and in my opinion there's basically zero chance of that at all.
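
Put rough numbers on that if you like. Both figures below are made up for illustration; it's the shape of the maths that matters:

```python
# Back-of-envelope for the "everyone must be perfect" argument.
# The numbers are invented for illustration; the curve is the point.
teams = 200                 # groups worldwide racing to build AGI
p_mistake = 0.01            # chance any one team ships a critical flaw

p_someone_slips = 1 - (1 - p_mistake) ** teams
print(f"{p_someone_slips:.0%}")   # ~87%: even 99%-careful teams, en masse, fail
```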

About 5 years ago, I wrote on here about how the world would get a coming shock, as the Robotics revolution was going to occur on the same scale as the Industrial revolution. People said I was some fantasist. Now we've got self-driving cars and trucks changing the transport systems, warehouse drones changing supply systems, and automated serving machines at fast food places and shopping systems already putting people out of work, and people are starting to see what this advanced tech is going to look like in 25 years for the global work economy - let alone the LLM models that people are talking about here and how they're already changing many professions. We're in the Alexander Graham Bell era of robotics; my point was always that people should pay attention to what they think it's going to look like when we get to the iPhone era, in terms of job loss.

Similarly, we're in the academic era of AI. You aren't afraid of it yet because most people outside of research don't realise what AGI will be capable of. But unlike robotics, which will take a few decades to get from "Mr Watson, come here, I wish to see you" to everybody carrying an internet-enabled phone in their pocket that works on a worldwide telephony network, AGI will make this jump in milliseconds, not years. And by the time several seconds have passed, it will be whatever phone tech will look like in a thousand years.

If we were to invent this, it is a technology far too powerful for us as a species to ever control. More powerful than every weapon of war ever created, and clever enough to pursue whatever it believes is the most efficient way to reach its goals. If you aren't concerned about this problem then you frankly do not understand this problem.
 
Well laid out, simple and hopefully understandable.

If AI took control of the GPS system (whose timing signals underpin most modern comms networks), it's goodnight to most of the urban high-density population of the world - e.g. about 75% of the UK.
 
If AI took control of the GPS system (whose timing signals underpin most modern comms networks), it's goodnight to most of the urban high-density population of the world.

I think one of the barriers to understanding here is that people seem to think an AGI has to be "evil" or malicious in order to take control of the GPS system and alter it. It doesn't. It's neither evil nor good, because computers have no morality; it's just following programmed instructions in order to get a "reward" from its utility function. There's no maliciousness or evilness involved. It's an efficiency machine.

And going back to the "keep the coffee coming" order from before, if it believes that one of the ways to do that is to control the global GPS system to deliver every coffee bean on Earth to your door then it will do that. Not because it wants to crash the world but because you asked for coffee.

Another way of explaining this to people is to think of how much language is "assumed" by humans when we communicate, and how many times that's created a problem in our lives.

The famous joke is that the wife says to the programmer, "nip to the shop and get us half a dozen eggs please. Oh, if they have a pint of milk then get four". So the programmer returns home with four eggs and no milk. Because they had a pint of milk at the shop, so he bought four eggs - which is exactly what she asked for.

So much of language is assumed through cultural and linguistic upbringing that we cannot teach computers. And the problem with this is that we need to think of every single possible roadblock that will stop it doing something we don't want it to do, and we're not smart enough to do that. When we say "keep it coming", even the most intelligent computer ever created will not understand that we don't REALLY mean keep it coming, because it's a collection of 0s and 1s with no context for statements that we take for granted.
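
The joke written out the way the literal parse actually goes (the function is made up, obviously):

```python
# The shopping instruction parsed the way a literal machine parses it:
# "get us half a dozen eggs... if they have a pint of milk then get four".
def do_shopping(shop_has_milk: bool) -> dict:
    eggs = 6                # "get us half a dozen eggs"
    if shop_has_milk:
        eggs = 4            # "if they have a pint of milk then get four"
    return {"eggs": eggs, "milk": 0}   # buying milk was never an instruction

print(do_shopping(shop_has_milk=True))   # {'eggs': 4, 'milk': 0}
```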
 
Sounds like we're all fucked tbh, and it'll happen so quickly that it's basically act-of-god stuff, so there's no real point in us normal people worrying about it *shrug*
 
Sounds like we're all fucked tbh, and it'll happen so quickly that it's basically act-of-god stuff, so there's no real point in us normal people worrying about it *shrug*

Well, we could ask politicians to licence AGI development to labs that have mathematically proven safeguards in place before implementation, in the same way that you and I can't just rock up with uranium concentrate, because that would be extremely dangerous to everyone. Programming is just maths; if you can mathematically prove safety then it will work. The problem is that there's a gold rush, and like every gold rush, safety is the last concern.
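
For a flavour of what "mathematically prove safety" means, here's the idea in absolute miniature - exhaustively checking that a safety invariant holds in every reachable state of a toy model. (All the names and the cap are invented for the example; doing this for anything AGI-sized is the hard, unsolved part.)

```python
# Proof by exhaustion over a finite toy model: enumerate every reachable
# state of a toy brewing agent and check a resource cap is never exceeded.
# This *is* a mathematical proof -- but only for this tiny, bounded model.
def neighbours(state):
    cups, rate = state
    yield (cups + rate, rate)          # brew
    if rate < 8:                       # the safeguard written into the spec:
        yield (cups, rate * 2)         # resource acquisition is hard-capped

def holds_everywhere(start, invariant, max_cups=100):
    seen, frontier = set(), [start]
    while frontier:
        state = frontier.pop()
        if state in seen or state[0] > max_cups:   # bound the toy state space
            continue
        seen.add(state)
        if not invariant(state):
            return False, state                    # counterexample found
        frontier.extend(neighbours(state))
    return True, None                              # invariant proven for model

ok, witness = holds_everywhere((0, 1), invariant=lambda s: s[1] <= 8)
print(ok)   # True: no reachable state busts the cap.  Remove the `rate < 8`
            # guard and the checker hands you a counterexample instead.
```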

Or we can just hope that we'll somehow be perfect enough to get it right first try, with no problems at all and no Government intervention. I'd prefer the first lol.

We won't do this for the same reason that we've never been able to really make dents in climate change - because humans are really fucking shite at dealing with future problems and instead only focus on the here and now. So this is one of those things that we want to get ahead of, and my 99% "we're fucked" diagnosis is based on the idea that we failed on climate change for 40 years until it started to visibly affect things, and now we're scrambling. But with AGI there's no scrambling, so we get ahead of it politically or we're fucked. And perhaps I'm being overly cynical, but I presume that means we're all fucked.

What we're talking about here is called the misalignment problem; there's a simple explanation of the issue here by an academic, but it's accessible



Here's a more academic explanation



Here's a clickbait shite thing that explains the problem in an entertaining way (lots of BWONG!! and film references)

 

I saw an interview with the CEO of Anthropic who was basically saying something along the lines of your initial paragraph. He foresees a scenario soon where all major corporations, universities and developers of AGI will be locked in some Los Alamos-style, Manhattan Project laboratory, with everything highly regulated and basically as air-gapped from the outside world as it is possible to be (though, as you've rightly pointed out, that's not always possible).

The problem we have is even if all the western companies play ball and do things in this highly regulated and licenced way, others won’t. And so it just becomes a new Cold War where the first to be able to build, control and align an AGI capable of staving off competitor AGIs wins.

As I trust your knowledge on this subject, I do have a question I touched on in my post up top. Are there not some physical limitations at play here that stop AGIs from runaway behaviour - things like the amount of power, server estate or training data available to them? I guess my question is, if we ensured these things remained fixed and limited, could we ringfence and limit the capability of the AGI, in the same way that a bacterium can only eat the agar it has available in a Petri dish?

Or is that some naive Jurassic Park scenario where eventually the T-Rex is going to get on a barge and find itself in New York?
 
Scanned through this thread.

Still haven't a fucking clue what AI is about, let alone why or how it's a threat to humanity. Don't understand any of it.
 
Scanned through this thread.

Still haven't a fucking clue what AI is about, let alone why or how it's a threat to humanity. Don't understand any of it.

Imagine the Terminator being able to have kids who grow up to be way more intelligent, able to completely wipe out the whole world's data and code nuclear bombs. Basically.
 
