SkyBlueFlux
Well-Known Member
Read this earlier
Apple AI feature 'out of control', says former Guardian editor
Apple has pledged improvements to its news summarising tool, but critics say it is dangerous and needs to be withdrawn. (www.bbc.co.uk)
I know many people on here regard the BBC as untrustworthy/biased, but bearing in mind how many people rely on the Internet for news, it's quite concerning. AI in general bothers me a lot. Surely it's only as good as the programmers who wrote the code, and the potential cock-ups if it goes wrong are quite scary; you've even got politicians touting it as a great boost for the NHS, etc.
It’s a bit of a misconception that AI is just a load of code. In reality it’s not a lot of code at all. It’s a way of encoding vast amounts of data into learned parameters, such that when you input A you get the desired result B.
So your thought that it’s only as good as the programmers is not quite correct. If you look at Stockfish - the strongest chess engine - it is vastly superior to any human player. Its evaluation isn’t a set of rules designed by programmers; it’s largely a trained neural network, and we genuinely can’t explain its decisions - it’s too smart for us - it plays moves which confuse Grandmasters and seem to make no sense. But it wins. It “feels” the game in a way that is totally alien to any human being, practically indistinguishable from magic.
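To make the "learned parameters, not hand-written rules" point concrete, here's a toy sketch (illustrative only, not how any real engine works): the programmer writes a generic learning loop, never the rule "multiply by 2", yet the model recovers it from example data on its own.

```python
# Toy illustration: the "program" is learned parameters, not hand-written rules.
# We fit the hidden rule y = 2x + 1 purely from example input/output pairs.

# Input A -> desired result B, generated by a rule the learner never sees.
data = [(x, 2 * x + 1) for x in range(-10, 11)]

w, b = 0.0, 0.0   # the model's parameters, initially meaningless
lr = 0.01         # learning rate (step size for each correction)

for _ in range(2000):          # repeatedly nudge parameters to reduce error
    for x, y in data:
        err = (w * x + b) - y  # how wrong the current guess is
        w -= lr * err * x      # gradient step for the weight
        b -= lr * err          # gradient step for the bias

# After training, w is close to 2 and b is close to 1 - the rule was
# encoded into the parameters by the data, not written by the programmer.
print(round(w, 2), round(b, 2))
```

Scale the same idea up to billions of parameters trained on vast datasets and you get systems whose behaviour nobody wrote down line by line.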
The problem we’ll be facing soon, with the advent of OpenAI’s o3 model, is that LLMs are reaching this point in a variety of other disciplines. And when they reach that level of competency, all bets are off. o3 reportedly ranks equivalent to roughly the #175 best human competitive programmer in the world, and it can solve a breadth of frontier mathematics problems that very few individual humans could.
What happens when you don’t need computer programmers because the AI just does it far better? And what about when it solves a maths problem but we as humans can’t understand how it solved it? AI is making decisions for us, but we don’t really know why it’s making them. We just know it’s better at making them than we are, so we let it get on with it. So why employ any humans at all? Well, at least we still have manual labour? Oh, the AI will just solve embodied robotics, so now we have androids far more capable than we are.
And then it starts making decisions which seem good for us, until suddenly and very drastically we realise they are not good for us at all. In fact they are very bad.
Anyway I reckon we have another… few years before that… at least :)
The good news is that while the tech develops fast, adoption is much slower, and there are still elements of language models that make them unreliable in some circumstances. But you can guarantee that those won’t last long.