"It’s a concern going forward. You’d have to be an idiot not to accept that."
Hardly, just not sure I'll jump on board that humanity is all going to be wiped out pretty soon because of AI. Maybe that's my ignorance.
Frightens me to death; glad I'm 63 and not 13. Selfish, I know, and who knows what's beyond death, if anything?
It’s a concern going forward. You’d have to be an idiot not to accept that.
AI is a threat and will certainly change the landscape, but so is Climate Change, Nuclear, Russia, pandemics, flu, measles, Donald Trump, cancers, caffeine, the water we drink. Absolutely everything is a concern going forward. Humanity adapts and survives (on the whole). Still it worries me.
That article is from 12 years ago.
"Of course, and I fully agree/expect institutions, governments to work closely together to bring in controls and regulations, which is happening - https://en.wikipedia.org/wiki/Global_Partnership_on_Artificial_Intelligence"
They're not working closely together for controls or regulations, and that's why it is such a problem; they are all in a race to weaponise AI. It is like the nuclear arms race all over again.
I firmly believe that one day, and who knows when, someone or something (AI?) somewhere will turn the internet off. We as a race will be fucked. Nothing will work, nothing will happen.
Planes won't fly, supply networks will cease, power provision will end, and as humans we won't have a clue what to do about it... just my fear. Been saying this for at least the last ten years... could be totally wrong, who knows?
Time is the big problem, something BlueHammer isn't really getting. All those things he mentioned start slowly and take years to become an issue because they work on human or geological timescales. Computers function at the speed of light (ish). One of the key tenets of AGI is that it will self-learn: it will find the most efficient way to get to number 1 (i.e. complete its task). Humans go to school for decades to learn what this thing would learn in around a nanosecond. And every time it learns, it becomes more efficient at learning.
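To put a toy number on that compounding point, here's a back-of-envelope sketch. Every figure in it is invented purely for illustration (the rate, the 10% improvement, the 50 cycles); it's not a claim about any real system, just a picture of why "gets better at getting better" behaves so differently from steady progress:

```python
# Toy illustration only -- every number here is made up.
# Compares a learner with a fixed rate of progress against one
# that also improves its own learning rate a little each cycle.

def fixed_learner(rate=1.0, cycles=50):
    skill = 0.0
    for _ in range(cycles):
        skill += rate              # same gain every cycle
    return skill

def compounding_learner(rate=1.0, improvement=0.10, cycles=50):
    skill = 0.0
    for _ in range(cycles):
        skill += rate              # learn at the current rate...
        rate *= 1 + improvement    # ...then get better at learning
    return skill

print(fixed_learner())        # 50.0    -- linear growth
print(compounding_learner())  # ~1163.9 -- same 50 cycles, exponential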
The second thing I always try to stress is the computer itself. This is Colossus.
80 years ago, it was the world's most powerful computer. It wasn't what we refer to as "Turing complete" either; it wasn't a generalised computer, it was a machine built for one specific task. It took up a room, was a top-secret Government project, and in terms of computing power could, if stretched, very possibly run a single icon on your desktop. Though not the operating system or the desktop itself. Just the icon. Maybe.
Today I'm typing on something that is very literally over a million times more powerful than that. Think of iPhones, think of VR headsets, think of those hologram machines and robotics and all that. We have gone from Colossus to that in 80 years of progress. There are now computer chips so powerful and so small that we implant them in a human brain.
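Rough arithmetic on that, assuming the classic Moore's-law figure of computing power doubling roughly every two years held across the whole period (an assumption; real progress was lumpier than that):

```python
# Back-of-envelope only, assuming one doubling of computing
# power roughly every 2 years (classic Moore's-law figure).
years = 80
doubling_period = 2                 # years per doubling (assumption)
growth = 2 ** (years / doubling_period)
print(f"{growth:.1e}")              # 1.1e+12 -- about a trillion-fold
```

So on that assumption, "over a million times" is, if anything, an understatement.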
What you're seeing with AI at the moment is that we're in the Colossus era. It's a fun distraction, it can do one thing well and better than humans can, but it's a bit limited and you need different Colossi for different tasks. I'm not asking you to judge AI where it is now, I'm asking you to think about what AI will look like in the VR headset era of its development.
The generalisation of AI is going to be the most dangerous invention in history, far more dangerous than any nuclear weapon or biological threat. And in my experience, I think we have neither the understanding, the foresight, nor the patience to safely control it. There's always some idiot who wants to rush, or skip a step, or who is not as smart as they think they are, and, as I say, it only ever takes one.
I get it. I just don't share the same panic level that we're basically screwed and the whole of humanity is about to be wiped out - maybe I'm just being naïve.
As always though, I enjoy your detailed posts on such subjects.