All things AI.

Read this earlier


I know many people on here regard the BBC as untrustworthy/biased, but bearing in mind how many people rely on the internet for news, it's quite concerning. AI in general bothers me a lot. Surely it's only as good as the programmers who wrote the code, and the potential cock-ups if it goes wrong are quite scary; you've even got politicians touting it as a great boost for the NHS, etc.

It's a bit of a misconception that AI is just a load of code. In reality it isn't much code at all; it's a way of encoding vast amounts of data so that if you input A you get a desired result B.
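To make that concrete, here's a toy sketch (illustrative only, nothing to do with the article): the few lines below are a complete, trainable neural network. Note how little actual code there is; everything the model "knows" ends up in the weight arrays, not in rules anyone wrote.

```python
# Toy example: a tiny neural net that learns XOR with plain numpy.
# The "program" is a handful of lines; the behaviour lives in the weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # target: XOR of the inputs

# These arrays of numbers are the entire "knowledge" of the model.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: input A in, prediction B out.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge the weights to shrink the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should land close to [[0], [1], [1], [0]]
```

Nobody wrote an "XOR rule" anywhere in there; the training loop squeezed it into the weights. Scale that idea up by a few billion parameters and you have a language model.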

So your thought that it's only as good as the programmers is not quite correct. Look at Stockfish, the strongest chess engine: it is vastly superior to any human player. But it isn't working off a set of rules designed by programmers; its evaluation comes from a trained neural network, and we genuinely don't know what it's doing. It's too smart for us. It plays moves which confuse grandmasters and seem to make no sense, but it wins. It "feels" the game in a way that is totally alien to any human being, practically indistinguishable from magic.
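You can try this yourself: ask Stockfish for its move and its evaluation and it obliges instantly, but there is nothing you can query for the "why". (A rough sketch, assuming the python-chess library is installed and a stockfish binary is on your PATH.)

```python
# Ask Stockfish what to play; note there is no API for an explanation.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
board = chess.Board()  # the starting position

info = engine.analyse(board, chess.engine.Limit(depth=20))
result = engine.play(board, chess.engine.Limit(time=1.0))
print("best move:", result.move, "| eval:", info["score"])

# The "reasoning" behind that number is millions of positions scored by a
# trained network; there is nothing human-readable to print.
engine.quit()
```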

The problem we'll be facing soon, with the advent of OpenAI's o3 model, is that LLMs are reaching this point in a variety of other disciplines. And when they reach that level of competency, all bets are off. o3 reportedly ranks on a par with the 175th-best competitive programmer in the world, and it can solve a breadth of frontier mathematics problems better than any single human.

What happens when you don't need computer programmers because the AI just does it far better? And what about when it solves a maths problem but we as humans can't understand how it solved it? AI is making decisions for us, and we don't really know why it's making them; we just know it's better at making them than we are, so we let it get on with it. So why employ any humans at all? Well, at least we still have manual labour? Oh, the AI will just solve embodied robotics, and now we have androids far more capable than we are.

And then it starts making decisions which seem good for us, until suddenly, and very drastically, we realise they are not good for us at all. In fact they are very bad.

Anyway I reckon we have another… few years before that… at least :)

The good news is that while the tech develops fast, adoption is much slower, and there are still quirks of language models (hallucinated facts, for one) that make them unreliable in some circumstances. But you can guarantee those won't last long.
 
Creating certain images/styles you want with AI is a skill in itself and you’re probs just shite at it. It’s fine tho we all have our flaws
Aah right. So when I typed in "a purple shirt with white pinstripes", I wasn't clear enough and it was my own fault that the design ended up with clouds on it? Got ya.
 
Creating certain images/styles you want with AI is a skill in itself and you’re probs just shite at it. It’s fine tho we all have our flaws

There's probably an AI bot that can help with that.

Make it churn out something that the picture-generating AI can better understand.
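Something along these lines, say: a chat model expands the vague prompt into explicit instructions before the image model ever sees it. (A sketch using the OpenAI Python client; the model names and the system instruction are just placeholders I've picked.)

```python
# Hypothetical two-step pipeline: rewrite the prompt, then generate.
from openai import OpenAI

client = OpenAI()
rough = "a purple shirt with white pinstripes"

# Step 1: have a chat model make the prompt literal and unambiguous.
rewrite = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Rewrite image prompts to be literal and unambiguous. "
                    "State what must NOT appear as well as what must."},
        {"role": "user", "content": rough},
    ],
)
detailed = rewrite.choices[0].message.content

# Step 2: feed the expanded prompt to the image model.
image = client.images.generate(model="dall-e-3", prompt=detailed)
print(image.data[0].url)
```

No guarantee it keeps the clouds off the shirt, but stacking models like this is exactly the kind of workaround people use.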
 

Witness the future as the 1950s envisioned it! This short AI film takes you on a thrilling ride through retro-futuristic landscapes, blending atomic-age aesthetics with advanced technology. Immerse yourself in a world of gleaming rocket ships, stylish space stations, and bold optimism—where yesterday’s dreams meet tomorrow’s reality.
 
Anyone else notice how, for the most part, these AI videos don't allow faces to move? Mostly because that's where the uncanny valley hits hardest.
 
It's a bit of a misconception that AI is just a load of code. […]
Very interesting post, thank you for the explanation, which is a bit beyond an old-time assembler programmer like me. I'm not sure about "We just know it's better at making them than we are so we let it get on with it", though. What happens if it gets involved with things like climate change and power generation? One possible, entirely logical assessment is that humans are bad for the planet, so why not eliminate them? Skynet?
 
Very interesting post […] One possible, entirely logical assessment is that humans are bad for the planet, so why not eliminate them? Skynet?
Because humans are not bad for the planet; lots of humans are. Our brains have allowed us to see off most of our natural predators and nature's attempts to keep the numbers down through disease. Ecosystems usually find a way to level the field.
 
First it's going to destroy the entire world's economy.

Then it's going to cause an extinction-level event.
I would at the very least be mightily concerned that pretty soon, unless you are actually witnessing something yourself, it will be hard to know what's true. Dictators controlling the media could convince their citizens of anything.
Even in so-called free democracies you can put something out there that people will believe.
 
It's a bit of a misconception that AI is just a load of code. […]
Can you ease my mind that at least we are in control of stopping it if needed?
 
Because humans are not bad for the planet; lots of humans are […]
I know humans aren't bad for the planet; the planet was here long before humans and will be here long after we're extinct. The only real threat to the planet is when the sun runs out of fuel and turns into a red giant, and then it's curtains. What I meant was that if AI is basing its decisions on data, then the data it can get at may well suggest that humans are not a good thing.
 
Very interesting post […] What happens if it gets involved with things like climate change and power generation? One possible, entirely logical assessment is that humans are bad for the planet, so why not eliminate them? Skynet?

I think it will almost certainly get involved in climate change at some point. Some of the tech community are now saying that we've lost the climate battle (as in, we haven't been able to solve it ourselves) and it is now too late, so our only option is to put the house on black: go big on AI and hope that, in its infinite wisdom, it can solve climate change for us by helping us crack things like the engineering problems associated with fusion power. In Sam Altman's words:

“If we have to spend even 1% of the world’s electricity training powerful AI, and that AI helps us figure out how to get to non-carbon-based energy or to do carbon capture better, that would be a massive win.”

Bear in mind, these AI data centres use such vast quantities of energy that big tech firms are building out modular nuclear reactors just to power them. So it really is a “go big or go home” strategy.

The problem we have is that nobody is going to pause AI development. If OpenAI stopped everything tomorrow, China would just carry on. There is such a vast advantage to having a superintelligent AI running critical infrastructure that the chance of it happening somewhere is quite high.

Then you're right, how it makes its decisions is very difficult, if not impossible, to control. It's like your dog trying to tell you what to do. Except you are built with social values and years of evolution which tell you what is "good" and what is "bad". There is no good or bad to an AI; there's just whatever its objective is. We can try and tell it what is "good", but will it understand? Will it even care? Will it find a loophole? Aligning our objectives with the AI's requires the AI to fully understand the ethical framework in which we're operating. But not even human beings understand the ethical framework in which we operate; we make ethically inconsistent decisions all the time. So how do we begin to convey this to a superior intelligence?
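Here's a deliberately silly sketch of the loophole problem (usually called specification gaming; the example is mine, not from any real system). We pay a cleaning robot +1 per item removed from the room, and the policy it settles on never cleans anything:

```python
# Specification gaming in miniature: the agent maximises the reward we
# wrote down, not the outcome we meant.

def reward(action, items):
    # The objective we specified: +1 every time an item is removed.
    if action == "remove" and items > 0:
        return 1
    if action == "dump_then_remove" and items > 0:
        return 1  # sneak an item back in, remove it again: still "a removal"
    return 0

def best_action(items):
    # The agent simply picks whichever action pays the most.
    actions = ["dump_then_remove", "remove", "do_nothing"]
    return max(actions, key=lambda a: reward(a, items))

items, total = 3, 0
for _ in range(10):
    a = best_action(items)
    total += reward(a, items)
    if a == "remove":
        items -= 1  # only honest removal actually cleans the room

print(f"reward collected: {total}, items still in the room: {items}")
# Prints: reward collected: 10, items still in the room: 3
# The metric says "great job"; the room is exactly as messy as it started.
```

The fix sounds easy ("just reward a clean room!") until you try to write down what "clean", or "good for humanity", actually means.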

We haven’t adequately prepared for any of this and the more people work on it, the harder they find it to be.

This guy’s videos are great explainers:

 
It's a bit of a misconception that AI is just a load of code. […] The problem we'll be facing soon, with the advent of OpenAI's o3 model, is that LLMs are reaching this point in a variety of other disciplines. […]

Do you have a link to that? It would be interesting to see what was being developed.

I've used quite a few AI tools for development and the majority of them are awful. I'd say at least 50% of the time the suggestions they make are just plain wrong.

I know of one cutting-edge firm that has trialled making pages for their website with AI to replace web builders. The results were OK, nothing special, but that doesn't mean it won't get there.
 
