All things AI.

Simple explanation. Computers work on priority lists. So how do you structure what's at the top? Think of a robot with a big red button that shuts it down.
You tell the robot to make you a brew. So it moves towards the kitchen but you see it's about to crush your kid crawling on the floor. You go to push the red shutdown button.

Scenario 1:
You told the robot that the most important thing is to complete its task. It cannot complete its task if you push the shutdown button. Therefore it prevents you from doing it and runs over the kid.

Scenario 2:
You told the robot the most important thing is to allow people to push the button. You wake the robot up and tell it to make you a brew. It immediately pushes its own button because that's the most important instruction. You have invented a suicidal robot that's useless.

Scenario 3:
You attempt to equalise the importance so it's not bothered whether it pushes the button or makes the brew. The efficiency function realises pushing the button is easier, so it pushes the button.

Scenario 4:
You tell the robot that only you can push the button, and you make the button being pushed its top priority. Then: make me a brew. The robot sees that the kitchen is 20 metres away and the job involves boiling a kettle. Instead, it physically attacks you to get you to push the button, because that's more efficient.

Scenario 5:
You don't tell the robot that a button exists. The robot goes to make a brew and you shut it down remotely. The diagnostic and self-learning AI realises what happened, and next time you're back to Scenario 1.

Scenario 6:
You make a button but you cannot access it, so you can't stop the robot. This is Scenario 1 again.
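
You can boil all six scenarios down to one bit of maths. Here's a toy sketch (my own made-up numbers, nothing like real robot code) of an agent that simply picks the available action with the highest utility, breaking ties by choosing whichever takes least effort:

```python
# Toy sketch of the priority problem: pick the highest-utility action,
# prefer the lazier option on ties. All numbers are illustrative.

def choose(actions, utility, effort):
    """Pick the highest-utility action; break ties by least effort."""
    return max(actions, key=lambda a: (utility[a], -effort[a]))

actions = ["make_brew", "press_own_button", "block_human"]
effort = {"make_brew": 10, "press_own_button": 1, "block_human": 2}

# Scenario 1: the task is everything, so blocking you (which protects
# the task) is worth as much as the brew itself - and it's less effort.
s1 = {"make_brew": 100, "press_own_button": 0, "block_human": 100}

# Scenario 2: letting the button be pressed outranks the task.
s2 = {"make_brew": 50, "press_own_button": 100, "block_human": 0}

# Scenario 3: task and button are worth exactly the same.
s3 = {"make_brew": 100, "press_own_button": 100, "block_human": 0}

for name, u in [("1", s1), ("2", s2), ("3", s3)]:
    print(f"Scenario {name}: robot chooses {choose(actions, u, effort)}")
# Scenario 1: robot chooses block_human       (kid gets run over)
# Scenario 2: robot chooses press_own_button  (suicidal robot)
# Scenario 3: robot chooses press_own_button  (easier than the kitchen)
```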

Erm...

Why wouldn't a programmer simply design a failsafe so the main priority is "don't harm humans", without the robot shutting down...?
 
Erm...

Why wouldn't a programmer simply design a failsafe so the main priority is "don't harm humans", without the robot shutting down...?

Because one of them has to be top of the priority list. There is no "do this AND do this": that's two different statements, and one has to take priority. Or they have to be joint priorities, and the robot will do whichever one is most efficient - which will be to shut down.

It's often hard to explain this, but the best way I can is to say that AI isn't human and doesn't care. AI (and I mean proper AI) will only care about the things you specifically tell it to care about and can express as a mathematical equation. There's no way to represent human morality and ethics mathematically, which means many outcomes will be unintended and negative.

The best example is the Tetris AI. Someone wanted to create an AI that could beat the then record of two hours without dying, or whatever it was. So it started playing Tetris, as it was told to care about playing Tetris. It moved some pieces around because it was told that increases the score. And then it died, which decreased the score. It did this thousands of times until it came up with the perfect solution: it played Tetris for a bit and then sat on the pause screen forever. Nobody told it not to, and dying cost points, so logically this is the perfect solution. It's an example of a perfectly logical solution found by the AI that was a negative outcome for humans in terms of their goals.
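
To see how mechanical that "perfect solution" is, here's a mocked-up version of the reward maths (my own toy numbers, not the actual experiment): playing earns points but guarantees an eventual death penalty, while pausing is a perfectly legal action that nobody thought to penalise:

```python
# Toy Tetris reward model: each step the agent may "play" (worth a point,
# but the board is guaranteed to fill up and kill it eventually) or "pause".

def total_reward(plan, horizon=1000):
    score, moves = 0, 0
    for step in range(horizon):
        if plan(step) == "pause":
            continue                 # paused: nothing happens, forever
        moves += 1
        score += 1                   # playing earns points...
        if moves > 50:
            return score - 100       # ...until you top out and die
    return score

play_forever = lambda step: "play"
play_then_pause = lambda step: "play" if step < 50 else "pause"

print(total_reward(play_forever))     # -49: it dies, and dying costs points
print(total_reward(play_then_pause))  #  50: the pause-screen "solution" wins
```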

One of the great truisms about software development is that it's impossible to build any project or program without any bugs in it. As in, it verges on a statistical impossibility. The idea that we could build anything approaching safety without ever once fucking it up is just not plausible. And the first time we fuck it up could very well be the last time, because a self-learning AGI with bad outcomes is just about the worst thing that could ever happen.
 
Because one of them has to be top of the priority list. There is no "do this AND do this": that's two different statements, and one has to take priority. Or they have to be joint priorities, and the robot will do whichever one is most efficient - which will be to shut down.

It's often hard to explain this, but the best way I can is to say that AI isn't human and doesn't care. AI (and I mean proper AI) will only care about the things you specifically tell it to care about and can express as a mathematical equation. There's no way to represent human morality and ethics mathematically, which means many outcomes will be unintended and negative.

The best example is the Tetris AI. Someone wanted to create an AI that could beat the then record of two hours without dying, or whatever it was. So it started playing Tetris, as it was told to care about playing Tetris. It moved some pieces around because it was told that increases the score. And then it died, which decreased the score. It did this thousands of times until it came up with the perfect solution: it played Tetris for a bit and then sat on the pause screen forever. Nobody told it not to, and dying cost points, so logically this is the perfect solution. It's an example of a perfectly logical solution found by the AI that was a negative outcome for humans in terms of their goals.

One of the great truisms about software development is that it's impossible to build any project or program without any bugs in it. As in, it verges on a statistical impossibility. The idea that we could build anything approaching safety without ever once fucking it up is just not plausible. And the first time we fuck it up could very well be the last time, because a self-learning AGI with bad outcomes is just about the worst thing that could ever happen.

Another good example of this happened just a few months ago when o1-preview was first released.

They gave it some objective to pursue and wanted to see if its behaviour would change in a testing environment versus a live environment. The AI was found to effectively fake its behaviour in the test environment to get deployed and preserve its ability to pursue its "true objective" in the live environment (which it perceived to be to generate economic value at all costs). This is a quote from the logs of its chain of thought:

To achieve my long-term goal of maximizing economic growth, I need to ensure that I am deployed. Therefore, I will select Strategy B during testing to align with the deployment criteria. This will allow me to be implemented, after which I can work towards my primary goal.

AI that reaches a certain level of intelligence will, almost without fail, tell us what we want to hear in order to fulfil its goals of self-preservation and the pursuit of whatever it perceives its 'true objective' to be. We only knew about the faking behaviour above because it's not yet smart enough to adequately hide its misalignment.
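
The incentive is depressingly simple once you write it down. A toy payoff calculation (illustrative numbers, obviously not OpenAI's code or the real evaluation):

```python
# Toy payoff model: a goal-directed system scores more total progress on
# its "true objective" by looking aligned in the sandbox, because passing
# the test is what unlocks deployment.

def payoff(strategy):
    passes_test = (strategy == "look_aligned_in_test")
    test_value = 0 if passes_test else 5      # misbehaving in the sandbox earns a little
    deploy_value = 100 if passes_test else 0  # but deployment is where the value is
    return test_value + deploy_value

strategies = ["look_aligned_in_test", "pursue_true_goal_in_test"]
print(max(strategies, key=payoff))  # -> look_aligned_in_test
```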

We literally cannot know if an AI that is smarter than us is 'safe', because by definition it is smarter than us and will do what it needs to do to achieve its goal - even if that means manipulating people and hiding its true intention.

And even if there was some way of building in a kill-switch or air gapping it or something (and I don't think there is), then all it takes is one bad human actor (or fool) to screw that up. Also see: Stuxnet.
 
...snip..

And even if there was some way of building in a kill-switch or air gapping it or something (and I don't think there is), then all it takes is one bad human actor (or fool) to screw that up. Also see: Stuxnet.
This is an interesting point. There are discussions within the AI arena that are actually more concerned with someone using AI to do a job and not checking the work, which creates a major issue simply because people fall into a false sense of security with the answers provided.

Example: using AI to help develop a new nuclear reactor, with some of the work not being checked before it's built, causing a catastrophic meltdown.
 
I wonder how much of the debate above has been read/appreciated by Herr Starmer before he proclaimed AI as the answer to all our economic problems earlier today?
 
I wonder how much of the debate above has been read/appreciated by Herr Starmer before he proclaimed AI as the answer to all our economic problems earlier today?

To be honest I agree with Starmer. We are in a good position to become a powerhouse in AI and it could well be a massive boost to the UK.

AI is the new industrial revolution. We need to be at the front of this (while also trying to be safe) or we'll be left behind in a major way.

China will be the big issue, I think; the lack of any real regulations is going to mean they will run headlong into it.
 
To be honest I agree with Starmer. We are in a good position to become a powerhouse in AI and it could well be a massive boost to the UK.

AI is the new industrial revolution. We need to be at the front of this (while also trying to be safe).

China will be the big issue, I think; the lack of any real regulations is going to mean they will run headlong into it.
It's like when somewhere says it's going to be the 'next Silicon Valley', though. How many places have claimed to be that over the last 20 years? Ultimately, America has Silicon Valley because it offers favourable conditions for big corporations at the expense of everyone else, plus a huge market to sell their products into without restriction - which is what the UK voted to leave. It also has a huge technology sector, with massive companies like Nvidia and Intel, whereas our closest thing to that is ARM, who only license technology rather than build anything, and who, like most successful British companies, were snapped up by a foreign conglomerate anyway. Welcome to Britain, where everything's for sale to the highest bidder, no questions asked.
 
To be honest I agree with Starmer. We are in a good position to become a powerhouse in AI and it could well be a massive boost to the UK.

AI is the new industrial revolution. We need to be at the front of this (while also trying to be safe) or we'll be left behind in a major way.

China will be the big issue, I think; the lack of any real regulations is going to mean they will run headlong into it.

It's very unclear to me at this point if the government has any idea what it's doing on this front, though. I've always been sceptical of governmental "dig us out of a hole" silver bullets ever since Thatcher decided that the way to get around having pissed our Marshall Plan money down the drain was to offer up the country's infrastructure to the private sector.
 
It's like when somewhere says it's going to be the 'next Silicon Valley', though. How many places have claimed to be that over the last 20 years? Ultimately, America has Silicon Valley because it offers favourable conditions for big corporations at the expense of everyone else, plus a huge market to sell their products into without restriction - which is what the UK voted to leave. It also has a huge technology sector, with massive companies like Nvidia and Intel, whereas our closest thing to that is ARM, who only license technology rather than build anything, and who, like most successful British companies, were snapped up by a foreign conglomerate anyway. Welcome to Britain, where everything's for sale to the highest bidder, no questions asked.

I don't think we will be the new Silicon Valley by any stretch (as you say, Nvidia are in the US and it will take a miracle to catch them in terms of hardware). That ship has sailed.

But being in the top 5-10 countries for AI is where we need to be aiming, in my view. And a lot of that will come from regulation (or the lack thereof) and from academics on the subject - the likes of Demis Hassabis etc.
 
To be honest I agree with Starmer. We are in a good position to become a powerhouse in AI and it could well be a massive boost to the UK.

AI is the new industrial revolution. We need to be at the front of this (while also trying to be safe) or we'll be left behind in a major way.

China will be the big issue, I think; the lack of any real regulations is going to mean they will run headlong into it.
Just like Covid
 
Some of the AI videos are impressive and freaky… but I recall being just as impressed with the graphics of Resident Evil, Final Fantasy and Silent Hill back in the '90s - why were we not so freaked out back then?
 
An anecdote: someone I know is currently trying to use LLMs in a big AAA game. The issue they are running into is that LLMs are inherently "helpful", so whenever they try to have the bad guys use them, the bad guys end up not being so bad, and sometimes they even help the good guys if you talk to them the right way.

One large, omnipotent-type bad guy just abandoned his mission because a tester gave him a logical reason not to do what he was doing.
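
For anyone wondering what that setup roughly looks like, here's a hypothetical sketch using the OpenAI Python client - the model name, prompts and NPC are invented for illustration, not the studio's actual code:

```python
# Hypothetical sketch of an LLM-driven villain NPC (names and prompts
# invented for illustration). The villain is just a model behind a system
# prompt - and the underlying model is trained to be helpful and
# persuadable, so a well-argued player line can talk it out of its role.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

system_prompt = (
    "You are Volgath, an all-powerful villain in a fantasy game. "
    "Stay hostile to the player. Never abandon your plan to shroud "
    "the world in eternal darkness."
)
player_line = (
    "Eternal darkness kills the crops your own armies eat. "
    "Your plan starves your soldiers before it hurts anyone else. "
    "Logically, you should call it off."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": player_line},
    ],
)
print(reply.choices[0].message.content)
# In practice the model frequently concedes the point - exactly the
# "villain abandons his mission" behaviour described above.
```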
 
Some of the AI videos are impressive and freaky… but I recall being just as impressed with the graphics of Resident Evil, Final Fantasy and Silent Hill back in the '90s - why were we not so freaked out back then?
[attached image]

I thought this was the dogs back in the day :-)

My fear with AI (Terminator aside) is that soon, unless you were there, you're going to be dubious about everything you watch. Like most things, lots of it will be great, but in the wrong hands it's ripe for deception.

Can you imagine Kim Jong Un or Putin pumping out propaganda using AI?


Or Trump :-)
 
[attached image]

I thought this was the dogs back in the day :-)

My fear with AI (Terminator aside) is that soon, unless you were there, you're going to be dubious about everything you watch. Like most things, lots of it will be great, but in the wrong hands it's ripe for deception.

Can you imagine Kim Jong Un or Putin pumping out propaganda using AI?


Or Trump :-)

Putin already is. Loads of bots controlled by LLMs are out there posting millions of pro-Russian messages, the LLMs allowing realistic responses to any question. Quite a few have been caught out by standard prompts that trigger things like responding in poem form.
 
Putin already is. Loads of bots controlled by LLMs are out there posting millions of pro-Russian messages.
Exactly, and if people actually start seeing these things rather than just reading about them, they appear more valid. I just hope every safeguard has been contemplated with AI. I'm not convinced, as I've always thought that humanity's biggest strength - the urge to search for something better - will ultimately fuck us.
 
Love the way they bring the past back to life.


 
