Will AI wipe out humanity?

Oh sorry, these videos are more around the general question. To answer specifically how AGI could hurt humanity, there's an old video on a stamp-collecting AI that addresses it more directly. Maybe watch that first, then the ones above, to really try to understand the challenges that AGI presents.



If you want to know more about world states, utility functions and the maths behind this stuff, his channel is worth working through after the above. AGI is just maths and computer science.

These are all simplified thought experiments that explain the problem of AGI safety to a normal person, but the core idea is that you can't out-think something that's smarter than you, and it will find the loopholes without any regard for morality as we understand it.
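
To make the "world states and utility functions" idea a bit more concrete, here is a toy sketch (my own illustration in Python, not taken from the videos; the plans and numbers are invented): the agent scores every world state it can reach with a utility function and simply picks the maximum. Because the utility function only counts stamps, the winning plan is exactly the kind of loophole the designer never intended.

# Toy illustration: an agent that ranks world states purely by a utility
# function and picks the maximum. The plans and numbers are made up.

# Each world state: (description, stamps collected, side effect we'd object to)
world_states = [
    ("buy stamps with the budget given", 100, "none"),
    ("bid on every stamp auction online", 5000, "spends money it wasn't given"),
    ("turn all available paper into stamps", 10**9, "no paper left for anyone else"),
]

def utility(state):
    # The designer wrote "maximise stamps" and nothing more.
    _, stamps, _ = state
    return stamps

best = max(world_states, key=utility)
print("Chosen plan:", best[0])         # -> the paper-conversion plan
print("Ignored side effect:", best[2])

The point of the toy is only that nothing in the maths penalises the side effects, because they were never written into the utility function.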
 
Artificial General Intelligence is the single largest threat to the human species, ahead of nuclear war, climate change and other extinction-level events.

The only people who disagree are generally people who don't know what AGI is or why it's much more likely to wipe out the entire world than those things. This is almost certainly what will kill all life on Earth.
Bollocks. How the fuck will computers kill trees, grass or birds? How can AI control the weather and the rainfall to such an extent their lives are going to be wiped out through drought?

Nah. Let him talk his crap about stamps.

I think you're a bit too obsessed with unrealistic assumptions about AI, and to suggest every single life form on the planet is endangered because of a computer programme is ridiculous in the extreme.
 
Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th.

Great film.
 
It isn't a ridiculous assumption. We're talking about 'Super Intelligent AI' that could be significantly more intelligent than every human combined (our intelligence would be closer to a goldfish's than to the AI's).

There are currently no safeguarding laws in place. There is a rush to develop the most intelligent AI first. Because of those things, it's possible that mistakes can happen.

Realistically, there's probably a 20% chance AI will create a utopian world, a 40% chance it will be great, a 30% chance it will be shite and a 10% chance it will end humanity as we know it.

AI will likely reach human-level intelligence in the next 30 years. As soon as it becomes 'human-level' intelligent, it will quickly become super-intelligent (hours/days).
 
But aren’t humans just a complex array of zeroes and ones?

I don’t see the functional difference between 80 billion neurones that are either on or off, and the 80 billion transistors in a modern GPU that are either on or off.

There are only two real differences (a toy sketch of the "switch" idea follows the list):
1) the inputs
2) the software/wiring
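
Here is roughly what that "switch" abstraction looks like as code. This is a minimal sketch of a textbook artificial neuron, not a claim about how real neurones behave, and the weights are made up: weighted inputs come in, and the unit either fires or it doesn't.

# A single artificial neuron: weighted inputs, a threshold, an on/off output.
# The weights and threshold below are invented for illustration.

def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0   # "fires" or stays off

inputs = [1, 0, 1]          # point 1: the inputs
weights = [0.6, 0.9, 0.4]   # point 2: the wiring
print(neuron(inputs, weights, threshold=0.8))    # -> 1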

At the moment our best AI models generally work in a single mode (text, images or sound). But multi-modal general AI that can take inputs from different media is being built as we speak.

Our brain’s software has had about 4 billion years of head start on being optimised and trained through evolution. It feels like it’s only a matter of time until our AI neural networks catch up.
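
What "trained" means in the artificial case is worth spelling out, because nobody sits down and writes the final wiring. A minimal sketch (my own, with an invented one-weight "network" and made-up data): the weight starts arbitrary and gets nudged over and over until the outputs match the examples, a crude and much faster cousin of what evolution did to our wiring.

# Minimal sketch of training: nudge a weight until predictions match examples.
# The data is invented (y = 2x); the final weight is discovered, not programmed.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0                       # start with arbitrary wiring
learning_rate = 0.05

for _ in range(200):
    for x, y in examples:
        error = w * x - y
        w -= learning_rate * error * x   # adjust the wiring slightly

print(round(w, 2))   # -> 2.0, found by repetition rather than design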

I think people who think there is something inherently non-replicable about the way humans do things are in for a nasty shock.

Even through recent LLM development there have been papers released about how some formal logical principles seem to weirdly pop out of the mechanics of language itself. As in, our brains logically assessing a situation may differ little from these AI prediction engines, which are just coming up with the most likely next word in a chain of reasoning. My hunch is that we are not as special as we think we are.
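
For what it's worth, the "most likely next word" mechanism is simple enough to caricature in a few lines. This is my own toy, not from the papers mentioned: a table of invented word-follows-word counts, and a loop that always picks the most probable continuation. Even that can spit out something that looks like the first line of a syllogism.

# Toy "prediction engine": always pick the most likely next word.
# The counts are invented; a real LLM learns vastly richer statistics
# over whole contexts, not single words.

next_word_counts = {
    "all": {"men": 9, "cats": 1},
    "men": {"are": 10},
    "are": {"mortal": 7, "tall": 3},
}

def most_likely_next(word):
    options = next_word_counts[word]
    return max(options, key=options.get)

word = "all"
sentence = [word]
for _ in range(3):
    word = most_likely_next(word)
    sentence.append(word)
print(" ".join(sentence))   # -> "all men are mortal"
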
Is it though? Our brains aren't based upon software designed by humans, so that's not true, although ironically we don't actually know. Our understanding of our own biology is that there are lots of connections and networks, but how that translates into consciousness, learning, emotions etc. is completely unknown.

Ultimately, AI is currently very dumb because it is programmed with code written by humans, and that brings huge flaws and limitations because, as you say, we aren't that special and we don't have all the answers. You've said it yourself that AI is sometimes indistinguishable from people, but that's missing the point. AI may come to replicate human behaviour, but that just proves it isn't AI; it's just programmed to replicate a human. Any form of true AI will take on its own behaviour.

A dog acts and interacts with humans in its own way, and I'd be far more impressed if we could replicate that behaviour than by something that can write an essay for me. That will never happen though; we don't understand how dogs think, so we can't program it. Until we can otherwise replicate the biology, we will never have an AI dog that is indistinguishable from a real dog.

I write the odd bit of code, and the dominant programming languages have barely changed in over 30 years. Computers have got faster, but that's pretty irrelevant in terms of AI. We don't need faster hardware, we need a totally new generation of computer, and at the moment that's miles away. To get that we need to solve the bigger problems in fields like quantum mechanics, but relatively little progress has been made there for decades. For some of these problems we don't even know the questions, let alone the answers.

I'm not scared of developments in AI at the moment because it will always be self-limited by our current model of computing and our own ability to program it. What we should be very scared of, however, is future advances in biology, and especially merging ideas from physics and computing into biology. That's when things will get real.
 

I don’t think we’re really disagreeing here, though I think maybe I explained it sloppily.

I am saying the base “equipment”, which is ultimately just a load of switches, is the same. The differentiating factor is that we are not as good at wiring it all together as nature is. Our brain has had things built into it through evolution, year after year for billions of years, to become this incredibly complex network that we don’t understand. As you say, we are currently miles away from being able to write programs that efficiently and effectively replicate this complexity.

I think where we may disagree is what comes next. Because the fact is we don’t actually understand how most of these AI platforms work either. You feed the input in, it goes into this incredibly dense neural network with numerous layers and then you get an output. You can add elements that break this process up to try to understand the underlying “logic” but understanding why an AI makes the decisions it does is not always possible.
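
To put the "input in, layers in the middle, output out" picture in concrete terms, here is a minimal sketch (two tiny layers with made-up weights; a real model has billions, found by training rather than written by hand): every individual number is inspectable, but no single weight explains why the output is what it is.

# Minimal two-layer network: you can print every weight, yet the "why" of the
# output is smeared across all of them at once. The weights here are invented.
import math

def layer(inputs, weights):
    # each row of weights produces one output, squashed into the range 0..1
    return [1 / (1 + math.exp(-sum(i * w for i, w in zip(inputs, row))))
            for row in weights]

hidden_weights = [[0.2, -1.3, 0.7], [1.1, 0.4, -0.6]]
output_weights = [[0.9, -0.8]]

x = [0.5, 0.1, 0.9]
hidden = layer(x, hidden_weights)
output = layer(hidden, output_weights)
print(output)   # a single number between 0 and 1; no one weight "explains" it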

So we are not going to be the limiting factor in this for long, because frankly we aren’t the ones really writing the “code”. These things are being trained to generate their own neural pathways to a level of detail no human could ever manually code. We still currently have a level of command, but what happens when we use AI to improve the AI? That is already being experimented with, using models like autoGPT. Very quickly we are cut out of the equation completely, and what started as an exercise in mimicking human behaviour becomes an experiment in what happens if you get the smartest human in existence to continuously find ways to improve itself. Nobody knows where that leads. That’s like if we were the dogs and we had created a system which could keep improving itself until it was at human-level intelligence. To us, the idea of an intelligence above our level is alien.

The thing that I think is going to be the limiting factor is access to good data to train the models. Eventually that has to hit a ceiling, right? So I don’t share your confidence that this isn’t a cause for concern until we start integrating it biologically; I think this is incredibly dangerous already. But I think we do have some things counting in our favour.
 
