Chippy_boy
Well-Known Member
Deeper and deeper down the rabbit hole we fall...
I’m not sure I agree mate. Humans have capacity to imagine something new I don’t see AI replacing that - I just can’t see AI waking up one day saying I’m going to invent this…. It will be an invaluable tool once the idea has manifested itself
I used to think this about two years ago. Now I not only think it’s wrong, I am absolutely certain that it is. And GPT-4 is not even the best proof of that. It’s better demonstrated in systems like AlphaZero. They do exactly what humans do, using “intuition” to invent strategies and think moves ahead in games of Go and Chess. But they do it far better than the best humans, to a scary degree.
The idea that humans are somehow special or our brains are doing anything fundamentally different to what an AI is doing is starting to look very naive. The only difference is how the neurones in our brain are organised, how they perceive different senses to combine data from experiences, and the sheer scale and efficiency involved which has evolved over a billion years. Human beings have an extraordinarily large training data set and a very efficient compiler.
These are all eminently surmountable barriers. It’s largely just a matter of how long it takes.
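The AlphaZero point above can be sketched in miniature. This is a hypothetical toy, not the real AlphaZero code: a hand-written value function stands in for the learned value network (the “intuition”), and a shallow minimax lookahead stands in for the tree search, on a trivial take-1-or-2 counting game.

```python
# Toy sketch of the AlphaZero idea: learned "intuition" guiding lookahead.
# Game (hypothetical): a pile starts at n stones; players alternately take
# 1 or 2; whoever takes the last stone wins.

def value(n):
    # Stand-in for the value network: estimated chance that the player
    # about to move from state n wins (multiples of 3 are losing).
    return 0.0 if n % 3 == 0 else 1.0

def best_move(n, depth=3):
    """Search a few plies ahead, trusting value() at the horizon."""
    def score(n, depth):
        if n == 0:
            return 0.0                       # no stones left: side to move already lost
        if depth == 0:
            return value(n)                  # horizon reached: fall back on "intuition"
        # My score is 1 minus the opponent's best score after my move.
        return max(1.0 - score(n - m, depth - 1) for m in (1, 2) if m <= n)
    return max((m for m in (1, 2) if m <= n),
               key=lambda m: 1.0 - score(n - m, depth - 1))

print(best_move(4))  # take 1, leaving the opponent a losing multiple of 3
print(best_move(5))  # take 2, same idea
```

The real systems replace `value()` with a deep network trained by self-play and the minimax loop with Monte Carlo tree search, but the division of labour is the same: search sharpened by a learned evaluation.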
I asked it to invent an engine to let me travel at the speed of light. It said it was beyond our current technology. I asked it what technology it needed. It didn’t know.
It sounded clever when it told me it didn’t know however.
100% spot on. There's nothing "magic" about the human brain, other than it's incredible that evolution could have resulted in such an amazing thing. It is absolutely inevitable that man-made neural networks will soon be equally capable, then a bit more capable, then a lot more capable, and then massively more capable. Whilst we are limited by our own biology, synthetic AI will not be. It will be able to have more and more neurons, processing faster and faster.
Some people are hung up on the idea that because GPT-4 and other transformer models are not built like the human brain, they necessarily cannot think. I think this is patently wrong, and that these models do, in some sense, think. It may not be thinking in exactly the same way as we think, but it is unimaginable to me that you can write entire paragraphs, books even, by simply predicting the next word. It's ludicrous to suggest that this is all they are doing.

While a bit humorous, one of the greatest current challenges with LLMs is getting them to acknowledge when they don’t know something, rather than telling you what they think you want to hear and hallucinating an answer. They are improving in this respect.
Also, we need to get away from the idea that ChatGPT is the kind of AGI we’re talking about more broadly. ChatGPT wasn’t built to solve the mysteries of the universe; it was built for one very specific task, which is to replicate human writing and speech.
It just so happens that an unintended consequence of it getting really good at this is that it opened the door to all sorts of other things. You’ve basically built an interface for humans to interact with AI the same way they would with a person, which is extremely powerful for conveying ideas.
An actual AGI will also have reasoning capability built in, which GPT doesn’t explicitly have… though some reasoning does happen to pop out of its answers, which is very interesting in and of itself and has prompted a lot of philosophical debate about what reasoning actually is.