Chat GPT

Not sure if this link can be accessed without subscription, but the podcast episode can be downloaded free. Fascinating interview on AI more generally.
https://www.newyorker.com/podcast/p...-far-too-late-to-stop-artificial-intelligence
Geoff Hinton is such a shining light in all of this. In fact, he and I have exchanged a few emails on the subject; I cannot share too much detail, but suffice it to say his last reply - and I am cutting and pasting - was:

[attached screenshot of the email reply]
 
Honestly mate, that is just so mistaken. The only bit I agree with is the "any time soon" part, and that's only because you are describing one of the most difficult puzzles in all of science. But the idea that AI can only parrot out what humans have fed it is, I'm afraid, simply not true.

In my experience, it's not people's understanding of AI that is most flawed, it is their understanding of humans.

What do people think we are, if not simply parrots with an extremely large multi-modal training data set? These words you're reading, you are able to do so because you have trained your brain on reading other words. The accent you have is something you picked up from hearing other people with that accent. Every word you say and every thing you think comes from the "data" that makes up your experiences. There is no magic happening.

"Creativity" comprises combining that data in a new way that produces something different. AI does this routinely to fold proteins, beat people at Go, or write a Shakespearian sonnet in the style of Michael Caine.

I suspect one day people will come to realise they are way more like parrots than the AI.
 
I suspect one day people will come to realise they are way more like parrots than the AI.
Spot on mate.

We really don't know what sentience is. We get into all sorts of philosophical discussions about it. What does it mean to "think"?

Well it seems to me - and this is just my layperson's view, mind you - that thinking and language are intrinsically linked. When you imagine concepts in your mind, you use words. What I believe is going on is that when these large language models reach a certain level of complexity, trained upon enough language, a strange and wonderful thing happens and they really do start to think. Not in precisely the same way we do, of course, because their architecture is different from ours, but we and AI share the same fundamental construct: we both possess very large and complex neural nets. These nets IMO start "thinking" once they've reached a critical level of complexity. I think it's an emergent property of very large neural nets.

Geoffrey himself has said he is sure that these large language models really do think. People might say they are just clever at predicting the next words in a sentence, but that's clearly impossible without understanding, in some way, what it is you are trying to say. An LLM could not write a whole article, a book even - as they can do - without understanding what it is they are talking about.
 
I was reading this new research from Google today and it reminded me of some of the recent discussions we’ve had in this thread, mainly the argument that these AI models can’t produce anything “new” and are simply re-hashing their training data.


In this study, Google have used an LLM combined with an “evaluator” (essentially an automated checker that scores the model’s candidate outputs and rejects incorrect ones, guarding against hallucinations).

Using this they have found new solutions to maths problems that human mathematicians have been working on for decades, namely the “cap set” problem. The model was able to construct larger cap sets than humans have managed in the last 20 years of trying. Terence Tao, arguably the most notable living mathematician, has described this as his favourite open problem in mathematics, and the area is now being advanced by AI.

I see this as strong evidence of what I’d mentioned previously, which is that creativity and innovation are not uniquely human traits - they are emergent qualities of any entity that has the capability to combine its vast training data in new ways. In my view there is quite literally no field that is safe from a sufficiently specialised AI.
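For anyone curious what that "LLM plus evaluator" loop actually looks like, here's a rough toy sketch in Python. To be clear, this is my own illustration, not Google's code: a cap set in F_3^n is just a set of vectors with no three distinct points summing to zero mod 3, the evaluator scores candidates and rejects invalid ones, and a simple random mutation stands in for the LLM generator.

```python
# Toy sketch of a FunSearch-style "generate and evaluate" loop for the
# cap set problem. NOT Google's code: their generator is an LLM proposing
# whole programs; here a random mutation stands in so the loop is self-contained.
import itertools
import random

N = 4  # dimension of F_3^N (the real results are for much larger dimensions)


def is_cap_set(points):
    """True if no three distinct points sum to zero mod 3 componentwise
    (i.e. the set contains no three-term arithmetic progression)."""
    pts = set(points)
    for a, b in itertools.combinations(pts, 2):
        # The unique third point on the line through a and b is -(a + b) mod 3.
        c = tuple((-(x + y)) % 3 for x, y in zip(a, b))
        if c not in (a, b) and c in pts:
            return False
    return True


def evaluate(points):
    """The 'evaluator': bigger valid sets score higher, invalid ones are rejected."""
    return len(points) if is_cap_set(points) else -1


def propose(points):
    """Stand-in 'generator': tack a random vector onto the current best set."""
    return points | {tuple(random.randrange(3) for _ in range(N))}


best = set()
for _ in range(20000):
    candidate = propose(best)
    if evaluate(candidate) > evaluate(best):
        best = candidate

print(f"Found a cap set of size {len(best)} in dimension {N}")
```

Obviously the real system searches over programs that construct cap sets and uses a proper LLM rather than random tweaks, but the division of labour is the same: one component proposes, the other checks and scores.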
 
"Creativity" comprises combining that data in a new way that produces something different. AI does this routinely to fold proteins, beat people at Go, or write a Shakespearian sonnet in the style of Michael Caine.
Once the AI learns how to bullshit about the art it's just 'created', that will be a turning point.
 
Using this they have found new solutions to maths problems that human mathematicians have been working on for decades.
What annoys me when using AI is the overlying set of rules that dictate what knowledge the AI is allowed to reveal. I have noticed this rule set is getting tighter and tighter, and there is clearly a political context to it.
 
In my view there is quite literally no field that is safe from a sufficiently specialised AI.
Agreed, but I'd actually re-state it as "no field is safe from a sufficiently capable general-purpose AI". I cannot imagine there will be ANY non-manual jobs which are safe over the next 20 years or so. The cleverest scientists, the most capable CEOs, the best teachers and philosophers will, within a very short space of time, not be humans. And give it another 20 years after that and no manual labour jobs will be safe either.
 
But the idea that AI can only parrot out what humans have fed it is, I'm afraid, simply not true.
No, it's a big question. It can come up with pretty much anything. But how useful and accurate will it be?
 
No, it's a big question. It can come up with pretty much anything. But how useful and accurate will it be?
Your question "how useful and accurate will it be?" makes no sense when you understand that, in a matter of 1 to 5 years, it will be more intelligent than all humans, ever.
 
The problem is you're talking about tomorrow.

Self-driving cars have been coming "tomorrow" for a decade.

Not happening this year. Nor next.

The problem with being 95% of the way to solving a problem is that the last 5% can either be incredibly drawn out and difficult, or prove fundamentally, utterly impossible with a given approach. The whole approach to defining the problem can be at fault.

We are not 95% of the way. 90% would be extremely generous.
 
