Chat GPT

I’m not sure I agree, mate. Humans have the capacity to imagine something new, and I don’t see AI replacing that. I just can’t see AI waking up one day and saying “I’m going to invent this”… It will be an invaluable tool once the idea has manifested itself.

I used to think this until about two years ago. I now not only think that it’s wrong, I am absolutely certain that it is. And GPT-4 is not the proof of that. It’s actually better demonstrated in systems like AlphaZero, which do exactly what humans do, using “intuition” to invent strategies and think moves ahead in games of Go and Chess. But they do it far better than the best humans, to a scary degree.

The idea that humans are somehow special, or that our brains are doing anything fundamentally different from what an AI is doing, is starting to look very naive. The only differences are how the neurones in our brains are organised, how they combine data from different senses and experiences, and the sheer scale and efficiency involved, which has evolved over a billion years. Human beings have an extraordinarily large training data set and a very efficient compiler.

These are all eminently solvable barriers. It’s largely just a matter of how long it takes.
 

I asked it to invent an engine to let me travel at the speed of light. It said it was beyond our current technology. I asked it what technology it needed. It didn’t know.

It sounded clever when it told me it didn’t know, however.
 

While a bit humorous, one of the greatest current challenges with LLMs is getting them to acknowledge when they don’t know something, rather than telling you what they think you want to hear and hallucinating an answer. They are improving in this respect.

Also, we need to get away from the idea that ChatGPT is the kind of AGI we’re talking about more broadly. ChatGPT wasn’t built to solve the mysteries of the universe; it was built for one very specific task, which is to replicate human writing and speech.

It just so happens that an unintended consequence of it getting really good at this is that it opened up doors to all sorts of other things. You’ve basically built an interface for humans to interact with AI in the same way they would with a person, which is extremely powerful for conveying ideas.

An actual AGI will also have reasoning capability built in, which GPT doesn’t explicitly have… though some reasoning does emerge in its answers, which is very interesting in and of itself and has caused a lot of philosophical debate about what reasoning actually is.
 
100% spot on. There's nothing "magic" about the human brain, other than it's incredible that evolution could have produced such an amazing thing. It is absolutely inevitable that man-made neural networks will soon be equally capable, then a bit more capable, then a lot more capable, and then massively more capable. While we are limited by our own biology, synthetic AI will not be. It will be able to have more and more neurons, processing faster and faster.

The thing that's actually pretty amazing is that GPT-4 has around 100x fewer neurons than a human brain, and yet it already knows more than any human on the planet. It understands umpteen languages and has the entirety of Wikipedia and umpteen other sources at its fingertips. We've invented a learning algorithm that is much better at learning than our own brains are.
 
Some people are hung up on the idea that because GPT-4 and other transformer models are not built like the human brain, they necessarily cannot think. I think this is patently wrong: these models do, in some sense, think. OK, it may not be thinking in exactly the same way we think, but it is unimaginable to me that you could write entire paragraphs, books even, by simply predicting the next word. It's ludicrous to suggest that this is all they are doing.

"The cat sat on the <blank>" might be easy enough. But what about the word after that? And the one after that? The permutations after only a few words are astronomical, and completely impossible to predict unless you understand what it is you want to say. These things think already, I am certain of it. They just demonstrate that we do not understand what thinking is or how it emerges from complex neural nets.
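The "predicting the next word" mechanic described above can be sketched concretely. Below is a deliberately toy bigram model over a hypothetical miniature corpus (my own example, nothing like a real LLM, which conditions on thousands of prior tokens through learned representations rather than raw word counts). It shows the autoregressive loop, and also why a model that only sees one previous word runs out of road almost immediately, which is exactly the combinatorial point the post is making.

```python
# Toy autoregressive text generation with a bigram model.
# Hypothetical miniature corpus; a crude stand-in for a language model.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n_words):
    """Greedily append the most likely next word, one token at a time."""
    words = [start]
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # this word never leads anywhere in the corpus
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("sat", 3))
```

With only one word of context, the model produces locally plausible but globally meaningless strings. Scaling the context and replacing counts with learned parameters is, at a mechanical level, what transformer LLMs do; whether that amounts to "thinking" is precisely the debate in this thread.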
 
The OpenAI debacle looks as though it might be as much about a lack of progress as a consequence of a great leap forward.

"Gates also predicted that in the next two to five years, the accuracy of AI software will witness a considerable increase along with a reduction in cost. This will lead to the creation of new and reliable applications. Interestingly, he also said that he anticipates a stagnation in development initially. The billionaire said that, with GPT-4, the company has reached a limit, and he does not feel that GPT-5 will be better than its predecessor."

I nearly made those exact predictions in this thread last week. I can't say I have any great insight that hasn't been around for some time, except that I could no longer ignore what a time sink it had all been, for not an awful lot of real results. And, at the end of the year, how few and far between the really useful and enjoyable products were.

Just watch out for SV's hype and lapses into shallow religious techno-optimism, and the myriad distraction and sales techniques (hype, conspiracy, etc.)... versus the reality for people like us: that by most measures our lives have not improved over the last 15-20 years in the way they generally did in previous decades.
 

And here's the real story... Silicon Valley startups take venture capital money, you form another startup, and they become your customer. I read the other day that companies that came through one source of investment (Y Combinator, which Sam used to run before being sacked for a conflict of interest) do over half their business exclusively with each other.
 