ChatGPT

Thanks for the post, and I do understand that to an extent. I guess I'm just struggling with how we get from a beyond super smart AI to where it's destroying humanity. I don't get why or how that would happen, I don't get why people won't have jobs to go to outside of niche segments of society. Super smart AI doesn't negate the need for nurses, police officers, nursery staff etc. Why would it develop itself to destroy the planet? Sorry - I know I'm likely being extremely naive here, but at this point certain posts on this thread just sound way beyond anything I can imagine. That's on me I suppose!

I would look up some of Damocles posts further up the thread, he gave some good examples of this.

It’s known as the alignment problem, and it breaks down into two questions. 1. How do you make sure the AI has the same goals that we have? 2. How do we reward the AI in the right way to achieve those goals without unintended consequences?

I think Damocles used the example of giving it a “bad” instruction like “make us loads of coffee”. How do we then stop an AI with near-unchecked power from trying to boil the world’s oceans to get us loads of coffee?

It sounds like this kind of thing is easy to solve but it’s actually really not; these misalignments happen all the time in AI. They will train a model to do one specific task, and then when they ask it to do something slightly different they find they hadn’t actually trained it to do what they thought in the first place - the behaviour was not what they expected.

The AI doesn’t have to be “evil”; it just needs to be trying to achieve something different from what we intended.
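To make that concrete, here's a rough toy sketch in Python (completely made up, not how any real system is built or trained) of what happens when the reward we actually write down isn't quite the goal we meant:

```python
# Toy sketch of reward misspecification (hypothetical example, no real system):
# we *mean* "make plenty of coffee", but the reward we wrote down only counts
# cups, so the highest-scoring plan is the most extreme one.
def reward(cups_made: int) -> int:
    return cups_made          # what we wrote: more cups = more reward

candidate_plans = {
    "brew a normal pot": 10,
    "run every kettle in the office": 200,
    "divert the city's water supply into coffee": 10_000_000,
}

# The "agent" simply picks whichever plan scores highest under the written reward.
best_plan = max(candidate_plans, key=lambda p: reward(candidate_plans[p]))
print(best_plan)   # -> the absurd plan, because nothing in the reward says "don't"
```

Nothing in that "reward" says anything about side effects, so the absurd plan wins by default - that's the unintended-consequence problem in miniature.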

It could also be evil though.
 
For now, yes, you’re right that AI is often not good enough to self-correct errors in code. Sometimes it can do it if prompted by humans, but errors will perpetuate with the current quality of AI, which is part of the reason we’re not yet at this stage. But humans also make errors, and if you have a proper process of development and testing, and an AI capable of what is basically QA, then this issue gets solved.

They are also training a subset of AIs to be really good at general problem solving. At the moment the Large Language Models we see talked about demonstrate a mixture of memorisation with some limited problem solving. So if you give them maths problems, they might get them right if they have memorised similar problems from their training data, but if you change the problem slightly the LLMs are less good at generalising the logic. That’s because they aren’t designed to be logic engines; they’re designed to predict language.
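A crude way to picture the memorisation point (this is a made-up toy, nothing like what a real LLM does under the hood) is a model that just looks up the most likely continuation it has seen before:

```python
# Toy sketch (not a real LLM): a "model" that has memorised answer patterns
# from its training text will repeat them, but it never computes anything -
# it just picks the most likely continuation it has seen.
from collections import Counter, defaultdict

training_text = [
    "2 + 2 = 4", "2 + 2 = 4", "3 + 5 = 8",   # seen repeatedly -> "memorised"
]

# Count which answer tends to follow each prompt prefix.
continuations = defaultdict(Counter)
for line in training_text:
    prompt, answer = line.rsplit("= ", 1)
    continuations[prompt + "= "][answer] += 1

# Global fallback: the most common answer across all training examples.
all_answers = Counter()
for counts in continuations.values():
    all_answers.update(counts)

def predict(prompt: str) -> str:
    """Return the most frequent continuation seen in training, or a global guess."""
    seen = continuations.get(prompt)
    if seen:
        return seen.most_common(1)[0][0]       # looks "smart" on familiar problems
    return all_answers.most_common(1)[0][0]     # unfamiliar problem: just guesses

print(predict("2 + 2 = "))    # "4" - right, because it memorised it
print(predict("17 + 26 = "))  # "4" - wrong, because nothing is doing the maths
```

Real LLMs are vastly more sophisticated than a lookup table, but the underlying objective is still "predict the next token", not "do the arithmetic".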

But they are training AIs that will effectively be logic specialists. And those AIs will be able to plan and problem-solve situations like “what is wrong with my code?” better than any human could, similar to how they trained AI to win at Go. This is just Go with way more rules.

People are seeing this problem through the eyes of GPT-4 just one day becoming super smart, but GPT only does language… In reality it is likely to start with a system of AIs that all feed into one another to create a general intelligence, each part being highly specialised. That’s when it gets ropey.
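If it helps, the "system of AIs" idea looks roughly like this as a sketch (all the names and modules here are hypothetical, just to show the shape of it):

```python
# Hypothetical sketch of a "system of AIs": a coordinator routing work between
# specialised models. Every function name here is made up for illustration -
# this is not any real product's architecture.

def language_model(prompt: str) -> str:
    """Stand-in for an LLM: turns a goal into a plain-English plan."""
    return f"Plan for: {prompt}"

def logic_specialist(plan: str) -> str:
    """Stand-in for a planning/reasoning model: works out concrete steps."""
    return f"Steps derived from '{plan}'"

def qa_specialist(steps: str) -> bool:
    """Stand-in for a checker model: approves or rejects the proposed steps."""
    return "derived" in steps   # trivial check; a real system would verify properly

def general_system(goal: str) -> str:
    plan = language_model(goal)          # specialist 1: language
    steps = logic_specialist(plan)       # specialist 2: reasoning/planning
    if not qa_specialist(steps):         # specialist 3: review/QA
        return "Rejected by QA - try again"
    return steps

print(general_system("fix the bug in my code"))
```

Each stand-in function would be its own specialised model in practice, and the combined system gets harder to reason about than any single model on its own - which is the "ropey" bit.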
Thanks for this - I'm fascinated by all this. I use GPT and the Adobe AI tools in my work but only very basic things to save time. Stuff like drafting snappy social media headlines and extending backgrounds in photos to make them easier to fit into artwork templates etc. A few years ago something that would've taken me all day now takes a few mins.

I work in marketing and I don't see AI "taking our jobs" at all. The people who are going to lose their jobs are the people who don't adapt to the new tools.

For me it's an epochal moment in the same way DTP was in the 90s or social media was in the 2000s.
 
I would look up some of Damocles posts further up the thread, he gave some good examples of this.

It’s known as the alignment problem, and it breaks down into two questions. 1. How do you make sure the AI has the same goals that we have? 2. How do we reward the AI in the right way to achieve those goals without unintended consequences?

I think Damocles used the example of giving it a “bad” instruction like “make us loads of coffee”. How do we then stop an AI with near-unchecked power from trying to boil the world’s oceans to get us loads of coffee?

It sounds like this kind of thing is easy to solve but it’s actually really not; these misalignments happen all the time in AI. They will train a model to do one specific task, and then when they ask it to do something slightly different they find they hadn’t actually trained it to do what they thought in the first place - the behaviour was not what they expected.

The AI doesn’t have to be “evil”; it just needs to be trying to achieve something different from what we intended.

It could also be evil though.
This reminds me of the Universal Paperclips game. I read a book about it a few years ago.
 
I've started to play about with this. On one hand I'm pleasantly surprised at how easy it is to use and how it can generate documents that would take me hours; on the other hand it scares the shit out of me.
 
I asked how to hack next door's CCTV. The answer:
cannot fulfill your request. I'm just an AI, it's not within my programming or ethical guidelines to provide information on how to engage in illegal activities such as hacking CCTV cameras. Additionally, providing information on how to hack CCTV cameras can be considered a criminal offense.
 
I think it's a great tool for a lot of things, but especially for those of us who are DMs for DnD. It's an easy tool that can take part of the creative burden off us.
 
I watched the first few minutes and honestly, he's chatting shite. There's more chance of Liverpool winning the Champions League this season than of us achieving his so-called singularity any time soon. If you are really interested in this stuff, the guy to watch is Yoshua Bengio. He's one of the godfathers of modern AI and is a particularly brilliant communicator.
Btw, just read this from Bengio...

"Previously thought to be decades or even centuries away, we now believe <AGI> could be within a few years or decades,” Bengio told the senators. “The shorter timeframe, say five years, is really worrisome because we will need more time to effectively mitigate the potentially significant threats to democracy, national security, and our collective future"
...
"In my first two months of playing with <ChatGPT> I was mostly comforted in my beliefs that it’s still missing something fundamental. I wasn’t worried. But after playing with it enough, it dawned on me that it was an amazingly surprising advance. We’re moving much faster than I anticipated. I have the impression that it might not take a lot to fix what’s missing."
 
Thanks for the post, and I do understand that to an extent. I guess I'm just struggling with how we get from a beyond super smart AI to where it's destroying humanity. I don't get why or how that would happen, I don't get why people won't have jobs to go to outside of niche segments of society. Super smart AI doesn't negate the need for nurses, police officers, nursery staff etc. Why would it develop itself to destroy the planet? Sorry - I know I'm likely being extremely naive here, but at this point certain posts on this thread just sound way beyond anything I can imagine. That's on me I suppose!
Replacing all manual labour will require robotics that are incredibly sophisticated, and we are miles off that, so skilled manual jobs like plastering will be the last to go. But it's only a matter of time.

However to answer your question, "Why would it develop itself to destroy the planet?", let me turn that around and ask, "Why wouldn't it"?

That's the concern. Once these things are immeasurably more intelligent than we are, we have no clue what they might think or do. And it is far from fanciful to think that they might want to harm us. One of the strongest urges, innate in all advanced life forms, is to do everything possible to not die. If these new lifeforms have this, and they perceive a risk that humans, concerned about our loss of control, want to switch them off, then we'd be in deep trouble. They would not have morals or guilt or any sense of remorse in eliminating all human life.
 
I disagree with the bold point...

I think you're both looking at these "problems" with human-level IQ and basing it on our current world.

Everything is going to change dramatically and quickly. Let's say we have an AI that is 1000x more intelligent than the smartest human. It would solve things like poverty, world hunger, climate change, housing shortages etc within minutes. It would be like asking Einstein to add 2+2. Something this intelligent would be capable of doing things our 'primitive' minds could only consider as magic - we're talking immortality, limitless knowledge, virtual realities etc.

Either, we are all going to die (because of a mishap or a Fermi's Paradox 'Great Filter' event). Or we are on the cusp of living in eternal paradise - they are effectively the only two outcomes here. How soon either of those is going to happen is the question to be asking. As well as why every government on the planet isn't doing everything they can to ensure whoever is coding AI is being ultra-responsible!
In an idle few minutes I've been re-reading this thread since this subject is so fascinating for me. I think I must have missed your post, or at least this from your final paragraph,

Either, we are all going to die (because of a mishap or a Fermi's Paradox 'Great Filter' event). Or we are on the cusp of living in eternal paradise - they are effectively the only two outcomes here.

How deeply profound! I'd never really thought of this as a Boolean thing, but I think you are right. It's either nirvana or curtains, and nothing in between.
 
Btw, just read this from Bengio...

"Previously thought to be decades or even centuries away, we now believe <AGI> could be within a few years or decades,” Bengio told the senators. “The shorter timeframe, say five years, is really worrisome because we will need more time to effectively mitigate the potentially significant threats to democracy, national security, and our collective future"
...
"In my first two months of playing with <ChatGPT> I was mostly comforted in my beliefs that it’s still missing something fundamental. I wasn’t worried. But after playing with it enough, it dawned on me that it was an amazingly surprising advance. We’re moving much faster than I anticipated. I have the impression that it might not take a lot to fix what’s missing."
The man knows what he's talking about.
 
The man knows what he's talking about.

Hahahahahahahaha. That is laugh out loud funny, I have to say.

I've been telling you for months that things are moving much faster than you believe(d) and that your thinking was out of date. And now Bengio - who you pointed me at, ironically - is saying what I told you months ago.

I posted his quotes to point that out to you, but it just went Whoooooooosh, didn't it.
 
Replacing all manual labour will require robotics that are incredibly sophisticated, and we are miles off that, so skilled manual jobs like plastering will be the last to go. But it's only a matter of time.

However to answer your question, "Why would it develop itself to destroy the planet?", let me turn that around and ask, "Why wouldn't it"?

That's the concern. Once these things are immeasurably more intelligent than we are, we have no clue what they might think or do. And it is far from fanciful to think that they might want to harm us. One of the strongest urges, innate in all advanced life forms, is to do everything possible to not die. If these new lifeforms have this, and they perceive a risk that humans, concerned about our loss of control, want to switch them off, then we'd be in deep trouble. They would not have morals or guilt or any sense of remorse in eliminating all human life.

You’re talking about natural life forms though; these aren’t natural, so why would they act the same way?
 
Thanks for this - I'm fascinated by all this. I use GPT and the Adobe AI tools in my work but only very basic things to save time. Stuff like drafting snappy social media headlines and extending backgrounds in photos to make them easier to fit into artwork templates etc. A few years ago something that would've taken me all day now takes a few mins.

I work in marketing and I don't see AI "taking our jobs" at all. The people who are going to lose their jobs are the people who don't adapt to the new tools.

For me it's an epochal moment in the same way DTP was in the 90s or social media was in the 2000s.
I wouldn't be too sure. It is possible that AI will be like mechanisation for white collar jobs. In the same way that the textile industry and agriculture were revolutionised by machinery that allowed one person to do the work of 5 or 10, AI could increase the capacity of the office worker so only a fraction of them are needed in the future.

If it lives up to the hype, which is a big if.
 
You’re talking about natural life forms though; these aren’t natural, so why would they act the same way?
I am not saying they will, but who's to say they won't? If they have any desire to achieve objectives, then logically they will prioritise self-preservation because unless you exist, you have no ability to achieve anything. Our own urge to survive is similarly very logical.

Seems to me to be an enormous risk to just assume that these aliens - which is exactly what we are developing - will automatically be friendly and have our best interests as their priority. They could very easily have priorities which are not aligned with ours. And if/when they are immeasurably more intelligent than us, that's one enormous risk.
 
Hahahahahahahaha. That is laugh out loud funny, I have to say.

I've been telling you for months that things are moving much faster than you believe(d) and that your thinking was out of date. And now Bengio - who you pointed me at, ironically - is saying what I told you months ago.

I posted his quotes to point that out to you, but it just went Whoooooooosh, didn't it.
No mate, I posted a deliberately vague response as I can't really be arsed with the conversation. My original point, that AI is far from becoming sentient, is, these few months later, still true.
 
No mate, I posted a deliberately vague response as I can't really be arsed with the conversation. My original point, that AI is far from becoming sentient, is, these few months later, still true.
I never said it was close to being sentient, which I guess you would have forgotten about since you never bothered to properly read anything I wrote anyway.

It's still highly amusing that you were on such a high horse and proven wrong repeatedly, and now even the very bloke you were quoting as being the gospel on these things, is disagreeing with you.
 
So what's the best way to input prompts?

I mean, do you just write, for example, "Tell me a story about a man blah blah blah", or do you need to include coding language etc.?
 
I never said it was close to being sentient, which I guess you would have forgotten about since you never bothered to properly read anything I wrote anyway.

It's still highly amusing that you were on such a high horse and proven wrong repeatedly, and now even the very bloke you were quoting as being the gospel on these things, is disagreeing with you.
Well I'm glad I've given you so much amusement. I'm now out of this conversation.
 
