ChatGPT

The problem is you're talking about tomorrow.

Self-driving cars have been coming "tomorrow" for a decade.

Not happening this year. Nor next.

The problem with being 95% of the way to solving a problem is that the last 5% can be either incredibly drawn out and difficult, or can prove fundamentally, utterly impossible with a given approach. The whole approach to defining the problem can be at fault.

We are not 95% of the way. 90% would be extremely generous.
Not happening this year? Well, we've only 15 days left, so that's not saying much.

I'll even throw you next year as well. 2025, I am not sure. Jury's out. 2035? 100% nailed-on certainty that AI will be significantly more intelligent than humans by then. Nailed on.

And what if I'm wrong and it's 2045? That's the blink of an eye in the scheme of things. The unavoidable, inescapable fact is that, barring nuclear war or some other such curveball, AI is going to be more intelligent than humans very soon. My personal view is it will be 2025, but maybe even next year.
 
I was reading this new research from Google today and it reminded me of some of the recent discussions we’ve had in this thread. Mainly the argument that these AI models can’t produce anything “new”, that they are simply rehashing their training data.


In this study, Google have used an LLM combined with an “evaluator” (essentially a tool that checks the truthfulness of outputs and deters hallucinations).

Using this, they have found new solutions to maths problems that human mathematicians have been working on for decades, namely the “cap set” problem. The model was able to construct larger cap sets than humans have managed in the last 20 years of trying. Terence Tao, arguably the most notable living mathematician, has described this as his favourite open problem in mathematics, and the area is now being theoretically advanced by AI.
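To make that concrete, here's a rough sketch (mine, not DeepMind's actual code) of the idea as I understand it: the evaluator is just a deterministic checker for the cap set property, and the LLM's job is to write the priority function that decides which vectors a greedy builder tries first. Function names here are my own invention:

```python
from itertools import combinations, product

def is_cap_set(points, n):
    """Evaluator: a cap set in F_3^n contains no three distinct points
    that sum to zero mod 3 in every coordinate (i.e. no three on a line)."""
    for a, b, c in combinations(points, 3):
        if all((a[i] + b[i] + c[i]) % 3 == 0 for i in range(n)):
            return False
    return True

def greedy_cap_set(priority, n):
    """Try vectors in priority order, keeping each one that preserves
    the cap property. `priority` is the part the LLM writes and the
    search evolves, scored by the size of the resulting set."""
    cap = []
    for v in sorted(product(range(3), repeat=n), key=priority, reverse=True):
        if is_cap_set(cap + [v], n):
            cap.append(v)
    return cap

# A deliberately dumb baseline priority; the search loop's whole job
# is to find a cleverer one than a human would think to write.
print(len(greedy_cap_set(lambda v: sum(v), n=4)))
```

The striking part, from what I read, is that the output is a short, human-readable program rather than an opaque blob of weights.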

I see this as strong evidence of what I’d mentioned previously, which is that creativity and innovation are not abstract human traits - they are emergent qualities of any entity that has the capability to combine its vast training data in new ways. In my view there is quite literally no field that is safe from a sufficiently specialised AI.
But surely maths problems are just advanced pattern recognition, the exact sort of thing that AI already excels at?

The problem with AI in terms of creativity is that it doesn't yet have any real way of experiencing the world. That's why its art and literature are so shit: it can only rely on second-hand accounts of the world to create its art, like those artists in the 18th century who had to draw creatures based entirely on the description of some explorer. I remember an interview with Graham Linehan where he was criticizing videogame writing, and one of the things he brought up was GTA, where the writer's only experience of their subject matter is watching Goodfellas 10 times. That's where AI is in terms of creativity, except that it's even worse, because it isn't capable of having any real concept of any of the things it's writing. The 'write in the style of Shakespeare' stuff works because language is a pattern it can recognize and replicate. But ask it to 'write a poem about what it feels like to be on top of Mt Everest' and it will just come up with banal, derivative crap that doesn't ring true, because it could never possibly know.

Having said that, one thing AI may reveal is just how much human-authored work is like that too, which is why it can still be quite easy to fool us.
 
I was saying fully self-driving cars aren't appearing on the road any time soon. AGI, uhh. Well. We'll see.
Your question "how useful and accurate will it be?" makes no sense when you understand that, in a matter of 1 to 5 years, it will be more intelligent than all humans, ever.
Plus dude, I'm talking from experience, trying to get the thing to code. It doesn't know when to ask questions, seek clarification, or qualify answers. It is like a lightning-fast polymath junior developer with very, very bad ADD who skipped a lot of classes. It's a powerful sidekick, one you have to correct and learn not to get frustrated with. But I, and others, still have to do a shitload of work.

It really is a superb regex generator, and I love it for that. That sort of makes sense, if you know what regex is!
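To illustrate (my own toy example, not a transcript): you ask it for, say, a regex for ISO-style dates, and what comes back is usually plausible but still needs testing, because a regex can't know things like month lengths:

```python
import re

# Plausible output for "match ISO dates like 2024-02-16"
iso_date = re.compile(r"^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

for s in ["2024-02-16", "2024-13-01", "2024-02-31"]:
    print(s, bool(iso_date.match(s)))
# 2024-02-16 True, 2024-13-01 False, 2024-02-31 True -- that last one
# is exactly the kind of thing you still have to catch yourself.
```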

It so often doesn't have executive intelligence, that ability to nail a question and solve a problem. To me that's something to do with what intelligence is. We're autonomous creatures. Our brains and bodies work together. Nature spent hundreds of millions of years encoding intelligence for the purpose of survival. If it didn't work, it died. Think of that: billions of creatures with billions of neurons, taught by reality - not what we say about reality, but reality itself - for a billion years. That's very different from the systems we're discussing.

Think about that, then think about the survival mechanisms involved in driving. That, to me, is what we're facing. It's possible (to me it seems obvious) that much of what makes life work is undescribed in language.
 
Fair enough mate. Your opinion is just as valid as mine. But IMO you are not factoring in the exponential rate of development. Where was AI 2 or 3 years ago? And now look at it.

We'll know soon enough whether you're right or I am. And FWIW I do think self-driving cars will take a bit longer, due to the need to intelligently process images REALLY fast, and to do so with cheap, affordable hardware. It's going to be much easier to create super-intelligence in a massive data centre. That, IMO, is months rather than decades away.
 
One thing I would also add is that we hear this whenever a new technology comes out, and there are a lot of people who stand to make a hell of a lot of money from overstating this technology. The whole of tech industry investment seems to run on potential rather than actual revenue nowadays (how the fuck is Tesla the most valuable car company when they sell fuck all cars, for example), so there's a huge incentive to exaggerate how much of a revolution this is. And every product has to add AI nowadays just for marketing purposes, regardless of whether it's good for the product.

I've been looking into AI in education at university a bit recently, and have tried out a few tools. The very best of it saves a bit of time for someone when making lessons or learning materials. The worst of it creates pedagogically shite resources that won't help anyone learn, but they'll get used anyway, because AI will be used to decrease the amount of time that educators get to think carefully about what they're providing students with, and they'll be under pressure to create more and more in less and less time.
 
Might be a good time to bump this thread again, given there have been some developments lately.

OpenAI have just revealed a new text-to-video model called Sora which at first glance blows competitors out of the water. You can see examples of the videos here:


Effectively photorealistic video from a simple two-sentence prompt.

Only a matter of time until your scheduled daily TV viewing comes courtesy of a bunch of GPUs.
 
That's cool. May need to get GPT+ so I can have a go when it's released.
 
I don't think people realise just how profound Sora is. If you look at the videos, it's easy to marvel at the quality of the videos, generated in response to simple English prompts. You look at the videos and say "Wow".

BUT... that is only half the story. In fact it's 1% of the story. What is truly mind-bending, shocking even, is that to do what it does, Sora has, without instruction, figured out how physics works. It cannot render tiny ships floating in a cup of foaming coffee without understanding the motion of ships, without understanding that coffee is not like water and foams differently, without understanding depth of field to make the ships look small. It cannot create a video of a woman walking in Tokyo without understanding how fabric sways and moves, how her handbag would swing.

It was not taught to understand these things. They just showed it lots of other videos and it figured it out. This is yet more evidence - and this time much more compelling - that these AIs we are creating really do, in some sense, "think". The idea that e.g. GPT-4 is just generating the next words in a sentence without understanding anything is just wrong. These things are thinking already.
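For anyone who hasn't seen it spelled out, "generating the next words" really is just this loop (a toy sketch; `model` here is a hypothetical stand-in for the actual network). The whole debate is about what the model has to represent internally for that one probability distribution to be any good:

```python
import math, random

def sample_next(logits, temperature=1.0):
    """Softmax the model's per-token scores into probabilities,
    then sample one token."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return random.choices(range(len(exps)), weights=[e / total for e in exps])[0]

def generate(model, prompt_tokens, n_steps):
    """Autoregressive decoding: each new token is drawn from a
    distribution conditioned on everything generated so far."""
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        logits = model(tokens)             # scores over the whole vocabulary
        tokens.append(sample_next(logits))
    return tokens
```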
 
The world has changed irreversibly again today with the reveal of GPT-4o.

Up until now, text-to-speech AIs weren't really good enough to convince you they were human. This omni-modal model, however... well, I'll let you judge for yourselves: