ChatGPT

Taking this scenario a little bit further.
Does Paul really need to paint the whole lounge in one go? Whatever he does, it's still going to take approx. 6 hours to complete both jobs.
Will Paul need to use the kitchen to prepare and cook his pizza, or will he use Just Eat to provide him with his desired dish?
There are other imponderables in this, on the face of it, straightforward puzzle.
 
Yep, there are a lot of reasons to be skeptical. But in all honesty, we're going to get screwed either way; the choice is by whom. A government we can vote out, or Google, Facebook, the new Chinese tech companies, Peter Thiel, and so on.

We've sleepwalked into giving all our shit away to Google and others already. The next lot might not be American, or Western. They might have far more leverage over us, and be far, far less interested in our wellbeing, safety and freedom.
Freedom.
That is something which doesn't truly exist in the physical world, 'even birds are chained to the sky' - Dylan.
Only in our thoughts and feelings can we experience true freedom, and our opportunities to achieve this are becoming fewer.
Our minds are being cluttered with 24/7 'news', bland and ridiculous TV programmes (soaps, Big Brother, meerkats etc.), social media and so on.
Our feelings are also trampled underfoot by media, pills, drugs, war etc.
So, freedom, point me in the right direction....
 
[image: walle-socialnetwork01.jpg]
Is that the Irish dipper?
 
Been using it again today, clever but unbelievably frustrating too.

I was asking ChatGPT to write me some JavaScript to do something very simple in a web page, but we kept going round in circles. I'll pay for a month of the new version and see if it does any better. It was hardly rocket science.
Pie charts....
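For what it's worth, the pie-chart case is a good example of how little code the task actually needs. Here's a minimal sketch (the function name and structure are my own, not what ChatGPT produced): compute each slice's start and end angle from the data, then draw the slices with the Canvas API in the browser.

```javascript
// Compute each pie slice's start/end angle (in radians) from raw values.
function pieSlices(values) {
  const total = values.reduce((a, b) => a + b, 0);
  let angle = 0;
  return values.map((v) => {
    const start = angle;
    angle += (v / total) * 2 * Math.PI; // slice size proportional to value
    return { start, end: angle };
  });
}

// In a browser you would then draw each slice on a <canvas id="chart">:
// const ctx = document.getElementById("chart").getContext("2d");
// pieSlices([30, 50, 20]).forEach((s) => {
//   ctx.beginPath();
//   ctx.moveTo(150, 150);              // centre
//   ctx.arc(150, 150, 100, s.start, s.end);
//   ctx.fill();
// });
```

The angle calculation is the whole trick; the rest is just `ctx.arc` calls.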
 
It can't think in the same way a human thinks; it isn't thinking at all. It's being trained on obscenely large amounts of data that we can't even fathom. The problem with visualising how large language models work is that people don't understand how computers work: they can do billions of operations a second.
There is enormous effort going into getting this result, through the use of massive data and massive computing power.
Your first sentence is correct, but it's the opinion of people far more knowledgeable than me that they do think.

This is "the Godfather of AI" Geoffrey Hinton's take on it, for example:

[embedded video]
If you think he's just some nut, take a look at his CV: https://en.wikipedia.org/wiki/Geoffrey_Hinton
 
Been using it again today, clever but unbelievably frustrating too.

I was asking ChatGPT to write me some JavaScript to do something very simple in a web page, but we kept going round in circles. I'll pay for a month of the new version and see if it does any better. It was hardly rocket science.
Bear in mind it hasn't been taught how to program JavaScript at all. Or indeed how to program anything. Let that sink in for a moment!

It's just figured it out for itself. How well do you think your Mum would do at writing some JavaScript if she had zero coding experience?

And imagine how good these models are going to get when we DO want them to be able to program.

This is really not me being a doom-monger, but people need to quickly get their heads around the fact that AI (and later, robotics) is going to replace human work in pretty much every area. Touchy-feely and complex manual jobs will be the last to go, but anything which requires thinking is up for grabs in the VERY near term. Even CEOs' jobs will go.
 
Your first sentence is correct, but it's the opinion of people far more knowledgeable than me that they do think.

This is "the Godfather of AI" Geoffrey Hinton's take on it, for example:

[embedded video]
If you think he's just some nut, take a look at his CV: https://en.wikipedia.org/wiki/Geoffrey_Hinton

I encourage you to have more faith in yourself! This sort of thing is basically high level philosophy of mind and is meant to be debated!
 
Yeah, in the video you posted he says they think because he used the phrase "it really does think that". I agree I would also use this wording, "it thinks X", in talking about ChatGPT, but it doesn't mean I think it is thinking in the same way that humans do. You can also say that a basic computer program thinks the next word I want to type in my autocomplete is "X", but no one talks about autocomplete having the ability to "think" in the other sense of the word.
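To make the autocomplete comparison concrete, here is a toy sketch of how a basic next-word predictor can work: just count which word most often follows each word in some sample text, then suggest the most frequent follower. (This is purely illustrative - real LLMs operate on a vastly larger scale and a completely different architecture.)

```javascript
// Build a table of bigram counts: for each word, how often each
// other word immediately follows it in the training text.
function buildBigrams(text) {
  const counts = {};
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  for (let i = 0; i < words.length - 1; i++) {
    const w = words[i];
    const next = words[i + 1];
    counts[w] = counts[w] || {};
    counts[w][next] = (counts[w][next] || 0) + 1;
  }
  return counts;
}

// "Autocomplete": return the most frequent follower of a word,
// or null if the word was never seen.
function predictNext(counts, word) {
  const options = counts[word.toLowerCase()];
  if (!options) return null;
  return Object.keys(options).reduce((a, b) =>
    options[a] >= options[b] ? a : b
  );
}
```

Nobody would say this table of counts "thinks", which is exactly the point being made; the open question is whether scaling prediction up by many orders of magnitude changes that.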
 
Personally I believe (think!) that if we continue along this road we will uncover more about the human mind and be able to truly mimic our "thinking" process. But we haven't got there yet with ChatGPT, even if it is vastly better at retrieving what we want from large amounts of data.
 
Yeah, in the video you posted he says they think because he used the phrase "it really does think that". I agree I would also use this wording, "it thinks X", in talking about ChatGPT, but it doesn't mean I think it is thinking in the same way that humans do. You can also say that a basic computer program thinks the next word I want to type in my autocomplete is "X", but no one talks about autocomplete having the ability to "think" in the other sense of the word.
It's a semantic argument really. The same can be said of the word "understand".

But the practical reality is it doesn't matter. If AIs in the very near future can perform all the mental tasks a human can do - and that is really just around the corner - then how they actually do it, and whether it's "proper" thinking and understanding or just fake, is rather irrelevant.

And the idea that once they have become our equal, they will very soon after be superior, is at the same time staggering, disturbing and incredibly exciting.

FWIW I am on the "it really does think" side of the fence. I think (no pun intended) people just have too narrow a definition of what thinking is. Some are constrained to believe that only biological "thinking" is real. I am not so narrow-minded. The fact is we have no clue what is going on in the billions of connections and weightings inside these monumentally complex models.
 
Personally I believe (think!) that if we continue along this road we will uncover more about the human mind and be able to truly mimic our "thinking" process. But we haven't got there yet with ChatGPT, even if it is vastly better at retrieving what we want from large amounts of data.
I think you are right. Incidentally, Geoffrey Hinton is at heart a cognitive psychologist. He started out 40 years ago investigating (silicon based) neural networks because he thought it would lead to a better understanding of how the brain works.

What has shocked him is that these current massive neural nets such as GPT4 already demonstrate capability and levels of human-like "understanding" that he never thought would be possible. In some ways they are better than human brain neural networks already.

It seems to me inconceivable given the rate of progress that machine intelligence will not surpass that of humans very very soon indeed. A few years ago, the median timeline for this amongst experts in the field was 80 years. I think you'd get a VERY different answer if you asked them now. But even if it is 80 years, that is NOTHING in the context of life on earth and indeed human life on earth. We are talking in the blink of an eye, man will be overtaken and not be the most intelligent "life" on the planet. Staggering.
 
Yeah, very interesting points! I agree that superintelligence is coming, and it opens up all sorts of questions about the human mind and what we are as humans.

I think we haven't, as a society and even in the intellectual class, really dived into what exactly is going on in our brains - and this is what is so interesting, and also what will be unavoidable the more advanced AI gets.
 
Oh and by the way, in my line above "We are talking in the blink of an eye, man will be overtaken and not be the most intelligent "life" on the planet", I am of course only referring to its cognitive capabilities, not its physical attributes.

But our own physical biology has only developed by chance - by the random errors caused by imperfect DNA replication happening trillions of times with the better mutations coming out on top. The end result might be human forms which are in many ways a marvel, but in others, quite naff.

I also think it inevitable in the fairly near future (I am talking less than 100 years) minds immeasurably superior to our own will be able to design and build synthetic "creatures" which outperform us physically in every respect.

So it's not only our brains that will be superseded, it's our bodies as well. When Philip K. Dick wrote "Do Androids Dream of Electric Sheep?", it was more than insightful, IMO.
 
Here's an idea: say we agree now that ChatGPT doesn't have human-like thoughts or feelings. If you ask it now, after a response to your question, how it knew, or what thought processes led to it, it will reply "as an AI language model..." and proceed to explain about neural networks. But let's say GPT-5 has been exposed to lots of personalities and how a human would answer, similarly to how it can replicate essays, or has been trained on lots and lots of chats between people talking about thought processes. This could probably fool 99 per cent of the people it interacts with.


But would that be the same as an AI that could think? One that has been trained how to respond? This seems an interesting question to ponder.
 
Here's an idea: say we agree now that ChatGPT doesn't have human-like thoughts or feelings. If you ask it now, after a response to your question, how it knew, or what thought processes led to it, it will reply "as an AI language model..." and proceed to explain about neural networks. But let's say GPT-5 has been exposed to lots of personalities and how a human would answer, similarly to how it can replicate essays, or has been trained on lots and lots of chats between people talking about thought processes. This could probably fool 99 per cent of the people it interacts with.


But would that be the same as an AI that could think? One that has been trained how to respond? This seems an interesting question to ponder.
Yes, it transpires that people are actually very easily duped. There's even a term for it - I can't remember it off-hand, but it's part of our psyche to attribute "human-ness" to non-human things, and when something actually pretends to BE human, we get drawn in hook, line and sinker.

I think it may be a long time (decades?) before we really know whether these things - GPT-5, GPT-6, PaLM 2, DeepMind Gemini etc. - have become conscious. We have no way of looking inside and figuring out what they are doing, and moreover, even if we did, who's to say whether some other non-human structure is actually thinking, even though it may not be doing it the same way humans do?

But whilst that question is interesting, it's also a bit of a moot point. If it walks like a duck, talks like a duck and quacks like a duck, then effectively it is a duck. If we cannot tell whether it's thinking like a human or not, does it matter whether it is?
 
Yes, it transpires that people are actually very easily duped. There's even a term for it - I can't remember it off-hand, but it's part of our psyche to attribute "human-ness" to non-human things, and when something actually pretends to BE human, we get drawn in hook, line and sinker.

I think it may be a long time (decades?) before we really know whether these things - GPT-5, GPT-6, PaLM 2, DeepMind Gemini etc. - have become conscious. We have no way of looking inside and figuring out what they are doing, and moreover, even if we did, who's to say whether some other non-human structure is actually thinking, even though it may not be doing it the same way humans do?

But whilst that question is interesting, it's also a bit of a moot point. If it walks like a duck, talks like a duck and quacks like a duck, then effectively it is a duck. If we cannot tell whether it's thinking like a human or not, does it matter whether it is?

Pareidolia

The two things we need to be most careful about with AI are the following:

1. Self-improvement: The ability for an AI to automatically review its own code and suggest improvements. At the moment the LLMs we have can only do this effectively with human supervision, but we are getting to the point where they soon may not need us. At that stage we could see what is known as an AI explosion: very rapid development of the AI that outpaces the speed at which any human could develop new iterations. There are some constraints on this which I think offer us some theoretical protection, like the training data that's available to the AI, the amount of processing power, etc.

2. Self-replication: This is a property that non-AI software can also have (e.g. a computer virus), but it is the attribute that leads to propagation. As long as an AI isn't able to propagate itself and there are 'air gaps', it will never be able to do any more than we allow it to do. In theory, we hold the keys to what the AI can access in our physical domain.

If we let either of these two things proceed without adequate safeguards, we won't even know we're in trouble until it's too late to do anything about it.
 
