Artificial Intelligence

hardly, just not sure I'll jump on board with the idea that humanity is all going to be wiped out pretty soon because of AI.
maybe that's my ignorance.
It’s a concern going forward. You’d have to be an idiot not to accept that.
 

of course, and I fully agree/expect institutions and governments to work closely together to bring in controls and regulations, which is happening - https://en.wikipedia.org/wiki/Global_Partnership_on_Artificial_Intelligence

AI is a threat and will certainly change the landscape, but so is Climate Change, Nuclear, Russia, pandemics, flu, measles, Donald Trump, cancers, caffeine, the water we drink. Absolutely everything is a concern going forward. Humanity adapts and survives (on the whole).
 
I'm trying to think of a way of explaining the AI safety issue in a condensed way, but it's a broad topic, so it's difficult. The reason that AGI is so dangerous can be simplified right down to the problem that 1 is greater than 0. That's essentially it.

AGI is not a person. It doesn't have a value system. It is a task-completing machine, and it will find more and more efficient ways of completing that task in nanoseconds, until, one second after you've switched it on, it's already more efficient than every human who ever lived combined. It learns at the speed of light.

If it is efficient for it to capture every single power grid on the planet in order to complete its task, then it's going to do that, and it will do it so quickly that nobody even recognises it's happened. You might say, "well, what if we air gap it?". Then you've created a scenario where it is efficient for the AGI to model your psychology and manipulate you into connecting it so it can efficiently complete its task.
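To make that concrete, here's a toy sketch in Python. The plans and the numbers are entirely mine, made up for illustration - it's not how any real system is built - but it shows why a pure task-optimiser grabs resources without any malice involved:

```python
# Toy sketch, hypothetical numbers: an expected-reward maximiser
# compares two plans for the same task. Seizing extra resources
# raises its chance of finishing, so the "grabby" plan wins on
# pure arithmetic - no malice required.

plans = {
    "just do the task":                  0.90,   # probability of success
    "seize the power grids, then do it": 0.999,  # better odds of finishing
}

def expected_reward(p_success):
    return p_success * 1.0  # reward: 1 for completion, 0 for anything else

best = max(plans, key=lambda name: expected_reward(plans[name]))
print(best)  # -> "seize the power grids, then do it"
```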

You might say "well what if we air gap it until its trustworthy and we've tested it extensively?". Now you've created an AGI that has been taught that it can only complete its task by passing your tests so it will passs all of your tests but immediately do what it wants after connection.

You might say "well what if we build instructions into it to not harm humans?". Firstly, you'd have to define humans and harm and that's a fun topic on its own. Secondly, is its task more important or less important than not harming humans? If its less important then it will ignore you. If it's more important than it won't complete its task because you now created a "don't harm humans" machine instead of a task machine because you made it more important. So a machine that will power up, ignore you then either idle away forever or immediately close down which is completely useless.

So you say, "ok, well let's teach it human values then!". Whose values? Mine? Yours? How? Most people can't even agree about whether the North Stand is a problem or not, let alone agree on a morally and ethically perfect system of values. Almost nobody has a consistent value system that is overarching and applies all the time, and nobody at all can come up with a way of writing that stuff down in any programming language that includes every possible exception and one-off scenario. To attempt to write a value system for an AGI that doesn't have any loopholes, you have to be smarter and more efficient than an AGI, which is not possible.
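If you want to see how fast hand-written rules spring leaks, here's a deliberately silly sketch - both rules and the action are invented for illustration:

```python
# Deliberately silly sketch: a hand-coded "harm" rule set, and the
# kind of action that slips straight through it. Every rule you add
# just creates new edge cases to exploit.

def violates_rules(action):
    rules = (
        lambda a: a["physically_injures_human"],  # rule 1: no physical harm
        lambda a: a["lies_to_human"],             # rule 2: no deception
    )
    return any(rule(action) for rule in rules)

action = {
    "description": "divert all hospital power to the task",
    "physically_injures_human": False,  # it never touches anyone...
    "lies_to_human": False,             # ...and it never says a word
}
print(violates_rules(action))  # False - a loophole in two rules flat
```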

The solution to this, which has never been found so far but has been extensively researched and postulated in AI safety academia, is for you to mathematically prove safety before you switch it on. Maths isn't magic: if you can prove it's safe, then it's safe. UNLESS you made a slight error somewhere in your work which nobody caught, and then, you know, we're all fucked. And every team on the entire planet - academia, corporations, even home users who someday want to create AGI or play with it - has to do that, without error, every single time, without skipping a single step or rushing to market or anything. Because it only takes one.
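For flavour, this is roughly what "prove it before switch-on" looks like on a toy three-state machine I've invented. It is a genuine proof for this machine; the unsolved research problem is that an AGI is nothing like a three-state machine:

```python
# Minimal sketch of proof-before-switch-on, for an invented toy system:
# exhaustively check that no reachable state violates the safety spec.

transitions = {
    "off":     ["running"],
    "running": ["running", "done"],
    "done":    [],
}
unsafe = {"seized_grid"}  # states the safety spec forbids

def reachable(start):
    seen, stack = set(), [start]
    while stack:
        state = stack.pop()
        if state not in seen:
            seen.add(state)
            stack.extend(transitions.get(state, []))
    return seen

# Safe iff no forbidden state is reachable. Note that one typo in
# `unsafe` (the spec itself) silently voids the whole "proof".
print(reachable("off").isdisjoint(unsafe))  # True for this toy model
```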

AGI is singularly focused on its reward system. How does it get to 1? Anything else is a total irrelevance to it. People cannot fathom how incredibly dangerous it is, and why most AI safety researchers are screaming from the rooftops about how we need to stop RIGHT NOW and evaluate and get legislation and safety in place, before somebody accidentally creates something that basically rips apart the world.
 
AI is a threat and will certainly change the landscape, but so is Climate Change, Nuclear, Russia, pandemics, flu, measles, Donald Trump, cancers, caffeine, the water we drink.

As mentioned, this is a massive, massive misunderstanding of the threat of AGI, the timescales it works on, and the potential damage caused.
 
That article is from 12 years ago.
Still, it worries me.

I firmly believe that one day, and who knows when, someone or something (AI?) somewhere will turn the internet off. We as a race will be fucked. Nothing will work, nothing will happen.

Planes won't fly, supply networks will cease, communications will end, as will power generation, and as humans we won't have a clue what to do about it... just my fear. Been saying this for at least the last ten years... could be totally wrong, who knows?
 
They're not working closely together on controls or regulations, though - that's why it is such a problem. They are all in a race to weaponise AI; it is like the nuclear arms race all over again.
 

Time is the big problem, something BlueHammer isn't really getting. All those things he mentioned start off and take years to become an issue, because they work on human or geological timescales. Computers function at the speed of light (ish). One of the key tenets of AGI is that it will self-learn: it will find the most efficient way to get to the number 1 (i.e. complete its task). Humans go to school for decades to learn what this thing would learn in around a nanosecond. And every time it learns, it becomes more efficient at learning.
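A quick back-of-envelope in Python, with a growth figure plucked from the air purely to show the shape of it - anything that gets better at getting better compounds absurdly fast:

```python
# Back-of-envelope only - the 10% per cycle is plucked from the air.
efficiency = 1.0
for cycle in range(100):  # 100 self-improvement cycles
    efficiency *= 1.10    # each cycle makes it 10% better, including
                          # at performing the next cycle

print(f"{efficiency:,.0f}x")  # ~13,781x after just 100 cycles
```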

The second thing to think about, which I try to stress, is the computer itself. This is Colossus.

[Image: Colossus]

80 years ago, it was the world's most powerful computer. It wasn't what we refer to as "Turing complete" either; it wasn't a generalised computer, it was a machine built for one specific task. It took up a room, was a top secret Government project, and in terms of computing power could, if stretched, very possibly run a single icon on your desktop. Though not the operating system or desktop itself. Just the icon. Maybe.

Today I'm typing on something that is quite literally over a million times more powerful than that. Think of iPhones, think of VR headsets, think of those hologram machines and robotics and all that. We have got from Colossus to that in 80 years of progress. There are now computer chips so powerful and so small that we implant them in a human brain.

What you're seeing with AI at the moment is that we're in the Colossus era. It's a fun distraction; it can do one thing well, better than humans can, but it's a bit limited, and you need different Colossi for different tasks. I'm not asking you to judge AI on where it is now, I'm asking you to think about what AI will look like in the VR headset era of its development.

The generalisation of AI is going to be the most dangerous invention the world has ever seen, far more dangerous than any nuclear weapon or biological threat. And in my experience, we have neither the understanding, the foresight, nor the patience to safely control it. There's always some idiot who wants to rush, or skip a step, or who is not as smart as they think they are and, as I say, it only takes one. Ever.
 

I get it. I just don't share the same panic levels - that we're basically screwed and the whole of humanity is about to be wiped out. Maybe I'm just being naïve.
As always though, I enjoy your detailed posts on such subject matters.
 

Again, it boils down to a simple thing really. If 1 is greater than 0 then we're all fucked.

Because if you make the safeguards 1, then you've got a machine whose goal is to use the safeguards and not complete a task.

If you make the task completion 1 and the safeguards 0.999999999999, then it will complete the task and ignore the safeguards.
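In toy code, with the two cases side by side (the weights are the ones above, the rest is invented): when the task and the safeguards conflict, it serves whichever term carries the bigger weight, and nothing else:

```python
# Toy sketch. When completing the task and keeping the safeguards
# conflict, the machine can only score one of them - and it always
# picks whichever carries the bigger weight.

def reward(did_task, kept_safeguards, w_task, w_safe):
    return w_task * did_task + w_safe * kept_safeguards

# Safeguards weighted 1, task 0.999999999999: it guards, and never acts.
print(reward(0, 1, 0.999999999999, 1.0) >
      reward(1, 0, 0.999999999999, 1.0))  # True

# Task weighted 1, safeguards 0.999999999999: it acts, safeguards ignored.
print(reward(1, 0, 1.0, 0.999999999999) >
      reward(0, 1, 1.0, 0.999999999999))  # True
```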

Until we can solve this problem, implement it, and then form some sort of worldwide legislation whereby every possible agent has to adhere to it AND not one person tries to break the law, we're knackered. I just think the likelihood of all of those happening in perfect sync is slim.

Look at climate change. Scientifically proven across numerous fields and numerous disciplines by researchers all over the world in hugely different specialisations, and about 30% of the politicians in the world's most powerful country don't even believe in it, let alone want to act on it. And that's happening right in front of them; this would have to be done in advance.

AGI is not going to fuck us because AGI is dangerous; it's going to fuck us because humans are dangerous.
 
