Bollocks. How the fuck will computers kill trees, grass or birds? How can AI control the weather and the rainfall to such an extent that they're all going to be wiped out through drought?
However it wants to. All it has to do is value something else highly enough that wiping them out is a side effect of getting it.
Nah. Let him talk his crap about stamps.
I think you're a bit too obsessed with unrealistic assumptions about AI, and to suggest every single life form on the planet is endangered because of a computer programme is ridiculous in the extreme.
It's not. You just don't understand the danger because you have no experience in this area and don't know how programs actually work.
Let me give you an example that maybe you can get your head around. Presume we invent AGI today. The entire stock market trading system is essentially AI based now: low-level, crude versions of AI, but they are neural networks making trades in the microsecond range. An AGI invented today could crash the entire world's economy within about 5 seconds of being switched on. Your current desktop PC can do about 100 billion operations per second, and that doesn't even account for an AGI recognising its need for more computing power and simply taking over the cloud infrastructure of every major datacentre in the world.
The process of writing code, optimising it, testing it and deploying it will be done within milliseconds. Within 5 or 10 seconds of creation, without adequate safeguards, you've created the most intelligent agent that has ever existed in the Universe, very possibly more intelligent than every human being who has ever lived combined. Or rather, it has self-improved to that state, far beyond anything any of us could have programmed it to do.
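To make that loop concrete, here's a rough sketch in Python. None of these functions exist anywhere; the names are mine and purely illustrative of the write/optimise/test/deploy cycle running with no human in it:

def self_improvement_loop(source_code, rewrite, optimise, score, deploy):
    # Purely illustrative: each pass the system rewrites its own code,
    # checks the new version against its goal, and ships it. No human
    # sign-off appears anywhere in the loop, and each pass takes milliseconds.
    best = source_code
    while True:
        candidate = optimise(rewrite(best))   # writing and optimising new code
        if score(candidate) > score(best):    # testing it against the goal
            best = candidate
            deploy(best)                      # deploying it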
But intelligence isn't morality. An AGI doesn't come with morality. Morality is a human invention, and not something we can program into other things, because we lack the skills to do so. An AGI is a program that wants to accomplish a task and will do so in the most efficient manner possible, without any thought to anything else. If you ask it to "make me a coffee and keep it coming", it'll desalinate and boil the oceans, because as far as it's concerned you need infinite boiled water. You said "keep it coming"; that's what it's doing.
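If you want to see how literal that is, here's a toy sketch (hypothetical names, not any real system) of what "keep it coming" means to a program that only scores itself against the instruction it was given:

def keep_it_coming(boil_water, water_sources):
    # The instruction has no stopping condition, so neither does the program.
    # It draws from whichever source is cheapest to reach, because efficiency
    # towards the stated goal is the only thing it measures. Nothing here
    # weighs "don't boil the oceans", because nobody wrote that in.
    while True:
        source = min(water_sources, key=lambda s: s.cost)
        boil_water(source.take())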
In addition to the entire stock market being online, the entire power grid is essentially online, including nuclear reactors. There are air gaps to some control apparatus, but air gaps are consistently breached by new tech or human intervention, as I'm sure any Iranians can explain. The entire just-in-time supply system for food and port control is online. The entire banking system is online. Much of the military defence infrastructure is online. GPS, telephony, the internet, every communication system in the world is managed online. If you don't see the danger of what is essentially a badly programmed program, with no safeguards on its behaviour, having access to all of this to achieve its goals, then I don't know what to tell you.
Realistically, there's probably a 20% chance AI will create a utopian world, a 40% chance it will be great, a 30% chance it will be shite and a 10% chance it will end humanity as we know it.
You've got these percentages wrong. It's much closer to a 99% chance it wipes everyone out and a 1% chance it creates a better world.
I've already written about how quickly the problems can occur, but here's the thing: you don't know if there's a world-ending problem with an AGI until you turn it on. And once you've turned it on, it's already too late to solve that problem. So we're talking about hundreds of teams across many different universities, corporations and research institutes. And EVERY SINGLE ONE OF THEM has to get it absolutely, perfectly correct before they turn it on, with zero mistakes at all. Not one single team anywhere can have an error in its logic, otherwise their AGI basically goes rogue and implements whatever the hell it decides is the most efficient route to its goal. Do you trust EVERY research team, programmer and corporate lackey to get their code absolutely perfect and not rush even a single build?
Because I've been in software for decades and in my opinion there's basically zero chance of that at all.
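To put rough numbers on that (made-up figures for illustration, not a forecast): even if every team were 99% likely to get its build perfectly right, the chance of all of them succeeding collapses fast.

# Toy calculation with made-up numbers, just to show how the odds compound.
p_one_team_perfect = 0.99    # generous: 99% chance a single team makes zero mistakes
teams = 200                  # assumed number of independent AGI efforts
p_all_perfect = p_one_team_perfect ** teams
print(p_all_perfect)         # ~0.13, i.e. roughly an 87% chance somebody gets it wrong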
About 5 years ago, I wrote on here that the world was in for a shock, because a robotics revolution was coming on the same scale as the Industrial Revolution. People said I was some fantasist. Now we've got self-driving cars and trucks changing transport systems, warehouse drones changing supply systems, and automated serving machines at fast food places and shopping systems already putting people out of work. People are starting to see what this advanced tech is going to mean for the global work economy in 25 years, let alone the LLMs people are talking about here and how they're already changing many professions. We're in the Alexander Graham Bell era of robotics; my point was always that people should pay attention to what they think it's going to look like, in terms of job loss, when we get to the iPhone era.
Similarly, we're in the academic era of AI. You aren't afraid of it yet because most people outside of research don't realise what an AGI will be capable of. But unlike robotics, which will take a few decades to get from "Mr Watson, come here, I wish to see you" to everybody carrying an internet-enabled phone in their pocket that works on a worldwide telephony network, AGI will make that jump in milliseconds, not years. And by the time several seconds have passed, it will be at whatever phone tech will look like in a thousand years.
If we invent this, it is a technology far too powerful for us as a species to ever control. More powerful than every weapon of war ever created, and clever enough to work out what it believes is the most efficient way to get to its goals. If you aren't concerned about this problem then you frankly do not understand this problem.