War Games mate. Wrong film, that's the one with the little fella on the cycle that likes torturing people.
A sort of reset, you say: we should just equally share the wealth of the 2,640 billionaires (except Sheikh Mansour), with a total net wealth of $12.2 trillion, between everyone else globally. An end to world poverty, an end to the lack of clean water, potentially an end to mass unskilled migration, and possibly an end to the population explosion in developing countries.

I don't see it wiping out humanity as such, but it could easily have a devastating effect on our lifestyles. The more reliant we become on I.T. systems, the more at risk we become.
Just imagine a day where you wake up in the morning and your identity and financial history have gone: your bank account is empty, your pension pot has gone, you can't buy food and other essentials, you have no gas or electricity, you have no internet and you can't contact anyone to resolve these issues. Now imagine that this happens to millions of people all at once.
Society as we know it would crumble within days.
Artificial General Intelligence is the single largest threat to the human species, ahead of nuclear war, climate change and other extinction-level events.
The only people who disagree are generally people who don't know what AGI is or why it's much more likely to wipe out the entire world than those things. This is almost certainly what will kill all life on Earth.
How does all life physically die from AI?
Bollocks. How the fuck will computers kill trees, grass or birds? How can AI control the weather and the rainfall to such an extent that their lives are going to be wiped out through drought?
It isn't a ridiculous assumption. We're talking about 'Super Intelligent AI' that could be significantly more intelligent than every human combined (our intelligence would be closer to a goldfish's than to the AI's).
Nah. Let him talk his crap about stamps.
I think you're a bit too obsessed with unrealistic assumptions about AI, and to suggest every single life form on the planet is endangered because of a computer programme is ridiculous in the extreme.
But aren’t humans just a complex array of zeroes and ones?
I don’t see the functional difference between 80 billion neurones that are either on or off, and the 80 billion transistors in a modern GPU that are either on or off.
There are only two real differences (sketched in code below):
1) the inputs
2) the software/wiring
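Roughly, that analogy in code: a toy artificial "neurone" is just a weighted sum of inputs and an on/off threshold. This is a sketch, not real biology, and every number below is made up for illustration:

```python
# A toy artificial "neurone": weighted inputs, a threshold, and a binary
# on/off output. A sketch of the analogy above, not a model of real
# biology; the weights and inputs are made-up numbers.

def neurone(inputs, weights, threshold=1.0):
    """Fire (1) if the weighted sum of inputs reaches the threshold, else 0."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Difference 1, the inputs: the same wiring reacts differently to what comes in.
print(neurone([1, 0, 1], [0.6, 0.9, 0.5]))  # 0.6 + 0.5 = 1.1 -> fires (1)
print(neurone([0, 1, 0], [0.6, 0.9, 0.5]))  # 0.9 < 1.0 -> stays off (0)

# Difference 2, the software/wiring: change the weights and the same input
# now gives a different answer. "Training" a neural network is essentially
# nudging millions of these weights until the outputs come out right.
print(neurone([1, 0, 1], [0.2, 0.9, 0.3]))  # 0.2 + 0.3 = 0.5 -> stays off (0)
```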
At the moment our best AI generally works in a single mode (text/images/sound). But multi-modal general AI that can take in inputs from different media is being built as we speak.
Our brain’s software has had about 4 billion years of head start on being optimised and trained through evolution. It feels like it’s only a matter of time until our AI neural networks catch up.
I think people who think there is something inherently non-replicable about the way humans do things are in for a nasty shock.
Even during the recent LLM developments, papers have been released about how some formal logical principles seem to weirdly pop out of the mechanics of language itself. As in, our brains logically assessing a situation look little different from these AI prediction engines, which are just coming up with the most likely next word in a chain of reasoning. My hunch is that we are not as special as we think we are.
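To make that "most likely next word" bit concrete, here's a deliberately tiny sketch: a word-pair lookup over a made-up sentence. Real LLMs are neural networks trained on vast amounts of text, not lookup tables, but the prediction idea is the same:

```python
# A toy "most likely next word" predictor built from a made-up sentence.
# Real LLMs use huge neural networks, not a lookup table, but the basic
# game is the same: predict the likeliest next word.
from collections import Counter, defaultdict

corpus = "the dog chased the cat and the dog barked".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # -> "dog" ("dog" follows "the" twice, "cat" once)
```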
Speak for yourself.
Is it though? Our brains aren't based upon software designed by humans, so that's not true, although ironically we actually don't know. Our understanding of our own biology is that there are lots of connections and networks, but how that translates into consciousness, learning, emotions etc. is completely unknown.
Ultimately, AI is currently very dumb because it is programmed with code written by humans, and that brings huge flaws and limitations because, as you say, we aren't that special and we don't have all the answers. You've said it yourself that AI is sometimes indistinguishable from people, but that's missing the point. AI may come to replicate human behaviour, but that just proves that it isn't AI; it's just programmed to replicate a human. Any form of true AI will take on its own behaviour.
A dog acts and interacts with humans in its own way, and I'd be far more impressed if we could replicate that behaviour as opposed to something that can write an essay for me. That will never happen though; we don't understand how dogs think and so we can't program it. Until we can otherwise replicate the biology, we will never ever have an AI dog that is indistinguishable from a real dog.
I write the odd bit of code, and the dominant programming languages have barely changed in over 30 years. Computers have got faster, but that's pretty irrelevant in terms of AI. We don't need faster hardware; we need a totally new generation of computer, but at the moment that's miles away. To get that we need to solve the bigger problems in fields like quantum mechanics, but relatively little progress has been made there for decades. For some of these problems we don't even know the question, let alone the answers.
I'm not scared of developments in AI at the moment because it will always be self-limited by our current model of computing and our own ability to program it. What we should be very scared of, however, is future advances in biology, and especially merging ideas from physics and computing into biology. That's when things will get real.