grunge
Well-Known Member
Time is the big problem, something BlueHammer isn't really getting. All the things he mentioned start off slowly and take years to become an issue, because they work on human or geological timescales. Computers function at the speed of light (ish). One of the key tenets of AGI is that it will self-learn: it will find the most efficient way to get to number 1 (i.e. complete its task). Humans go to school for decades to learn what this thing would learn in around a nanosecond. And every time it learns, it becomes more efficient at learning.
The second thing I try to stress is the computer itself. This is Colossus.
80 years ago, it was the world's most powerful computer. It wasn't what we'd now call "Turing complete" either: it wasn't a generalised computer, it was a machine built to do one specific task. It took up a room, was a top-secret government project and, in terms of computing power, could, if stretched, very possibly run a single icon on your desktop. Though not the operating system or the desktop itself. Just the icon. Maybe.
Today I'm typing on something that is quite literally over a million times more powerful than that. Think of iPhones, think of VR headsets, think of hologram machines and robotics and all that. We have gone from Colossus to that in 80 years of progress. There are now computer chips so powerful and so small that we implant them in human brains.
What you're seeing with AI at the moment is that we're in the Colossus era. It's a fun distraction; it can do one thing well, and better than humans can, but it's limited, and you need different Colossi for different tasks. I'm not asking you to judge AI by where it is now, I'm asking you to think about what AI will look like in the VR-headset era of its development.
The generalisation of AI is going to be the most dangerous invention in history, far more dangerous than any nuclear weapon or biological threat. And in my experience, we have neither the understanding, the foresight nor the patience to safely control it. There's always some idiot who wants to rush, or skip a step, or isn't as smart as they think they are, and, as I say, it only ever takes one.
Our current “AI” is going to be akin to the Industrial Revolution in terms of upheaval. It will be big, but what we have now is just not AI; it's not even remotely close. What it will bring is lots of automation of tasks.
Being scared of it at this point is akin to being scared of a database lookup, because that's pretty much all the “AI” we have now is: a very, very large database with probabilistic logic applied to pick responses from it.
There is no intelligence in any of it.
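The "large table plus probability-weighted choice" idea above can be sketched as a toy word model. This is a deliberately simplified, hypothetical stand-in (a bigram table, not how modern systems actually store knowledge — they learn statistical representations rather than literal lookup tables), but the final step of picking a continuation by probability is genuinely how text generation works:

```python
import random
from collections import defaultdict, Counter

# Toy sketch of "a big table with probability applied for responses".
# Assumption: a bigram lookup table stands in for the general idea;
# this is illustrative only, not how real large models are built.

def build_table(text):
    """Map each word to a Counter of the words observed to follow it."""
    words = text.split()
    table = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        table[cur][nxt] += 1
    return table

def next_word(table, word, rng):
    """Pick a continuation, weighted by how often it followed `word`."""
    counts = table[word]
    choices, weights = zip(*counts.items())
    return rng.choices(choices, weights=weights, k=1)[0]

corpus = "the cat sat on the mat the cat ate the fish"
table = build_table(corpus)
rng = random.Random(0)

# "the" was followed by cat (twice), mat, and fish, so "cat" is the
# most likely pick, but any of the three can come out of the sampler.
print(next_word(table, "the", rng))
```

There is no understanding anywhere in that code, just counting and weighted dice rolls, which is the poster's point about "no intelligence in any of it".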