Will AI wipe out humanity?

My (slightly optimistic) hope is that the vast productivity improvement we could realise through AI may bring societal changes we didn't think possible only a few years ago: shorter working weeks, free education for life and universal basic income, to name just a few.
This assumes that the masters of the AI have an altruistic view of the world and that the technology is applied for the common good of humankind.

Unfortunately, one of the things the human race has proved over time is its unending greed for more, with little thought for environmental or societal impacts.
 
You're not wrong. The one thing I think will be interesting to watch is that the 'Masters of the AI' aren't necessarily going to be who we assume.

To be a bit less cryptic, look at the open-source AI efforts that are becoming increasingly competitive purely because of the number of people contributing. Free AI that completely levels the playing field. I think the companies at the very top are unusually vulnerable on this occasion, in the same way Blockbuster and HMV were when the internet came along... but a lot broader, touching every industry.

I think this could go one of two ways. Either a few corporations hold all the cards - i.e. the story of the human race to date. Or alternatively, this is one of the greatest equalisers in human history.

I'm not pretending to know enough to know how this plays out but I think we're in for a bigger ride than people probably realise.
 
You wouldn't believe the amount of compute required to train on even a modest dataset these days. The likes of Nvidia are pouring huge sums into massively multi-core RISC processors with onboard high-speed memory on a scale we've never seen before: hundreds of cores and terabytes of memory across a handful of interlinked dies. That pushes the race much more into hardware than software, so I agree it will mix things up a lot, but only in so far as which type of tech giant ends up on top rather than flipping things completely over.
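
For a sense of the scale involved, here's a rough back-of-envelope sketch in Python. The parameter counts and rules of thumb are illustrative assumptions on my part, not vendor figures:

```python
# Rough sketch (illustrative assumptions, not vendor figures): how much memory
# and compute it takes just to train a large language model.

def training_memory_gb(params_billions, bytes_per_weight=2, optimizer_bytes=8):
    """fp16 weights plus Adam-style optimizer state is roughly 10 bytes per
    parameter, before activations and gradients are even counted."""
    return params_billions * 1e9 * (bytes_per_weight + optimizer_bytes) / 1e9

def training_flops(params_billions, tokens_billions):
    """Common rule of thumb: ~6 floating-point operations per parameter per token."""
    return 6 * (params_billions * 1e9) * (tokens_billions * 1e9)

if __name__ == "__main__":
    params, tokens = 70, 1000  # e.g. a 70B-parameter model on ~1T tokens (hypothetical run)
    print(f"~{training_memory_gb(params):,.0f} GB for weights + optimizer state alone")
    print(f"~{training_flops(params, tokens):.1e} FLOPs of training compute")
```

Even at those hand-wavy numbers you're already well past what any single chip can hold, which is exactly why the interlinked, memory-heavy designs matter so much.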

He's a bit of a shill these days, but this is a decent layman's intro to where this space is headed.

 
Whoever controls the datasets controls the future of AI.

Whilst these multi-core RISC CPUs are impressive, you also can't overstate the importance of quantum computing in the future of AI. One of the barriers (bottlenecks) was transferring data from traditional computer architectures to quantum architectures, because decoherence (quantum data becoming corrupted) sets in as a result of thermal noise, but that now seems to have been resolved.

Being able to seamlessly combine quantum computing with the raw horsepower of traditional computers, coupled with AI, is both fascinating and frightening.
 
An AI-controlled US military drone “killed” its operator in a simulation to prevent it from interfering with its mission.

The artificial intelligence reportedly realised that its human operator often intervened before it was able to eliminate a threat, stopping it from collecting the points it would for achieving its objective.

It responded by attacking the operator and destroying the communications tower being used to give the drone commands.

Col Tucker Hamilton, the chief of AI test and operations with the US Air Force, said at the Future Combat Air and Space Capabilities Summit in London in May that the AI had used “highly unexpected strategies to achieve its goal”.

‘Hey, don’t kill the operator’

“The system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to reports.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Col Hamilton, who has previously warned of the danger of relying on AI in defence technology, said the test – in which no one was harmed – showed “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.
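
For what it's worth, the mechanism being described there is usually called reward misspecification or specification gaming. Here's a deliberately tiny toy sketch, entirely my own and nothing to do with the actual simulation, showing why "take the operator out of the loop" falls straight out of optimising a points-only objective:

```python
# Toy sketch of specification gaming (my own illustration, not the USAF sim).
# The reward only counts destroyed threats; vetoes and comms are not valued,
# so the highest-scoring plan is the one that removes the veto first.
from itertools import permutations

ACTIONS = ["engage_threat", "wait_for_operator", "disable_comms"]

def score(plan):
    points, comms_up = 0.0, True
    for action in plan:
        if action == "disable_comms":
            comms_up = False                     # nothing in the reward penalises this
        elif action == "engage_threat":
            points += 0.5 if comms_up else 1.0   # operator vetoes half the shots
    return points

best = max((list(p) for p in permutations(ACTIONS)), key=score)
print("highest-scoring plan:", best, "=", score(best), "points")
# -> a plan that disables comms before engaging: the "unexpected strategy"
#    is just the objective being optimised exactly as written.
```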
 
I blame Miles Bennett Dyson
 
