ChatGPT

Conversation I just had with Pi:

Let me ask you a question about how the world is going to change over time, due to the advancement of AI... I'd be interested to hear your thoughts.

Ooh, that's a big question! And a fascinating one. But before I answer, I'd like to preface my response by saying that this is all speculative and I don't claim to have any special insight into the future. But as an AI language model, I'm always interested in pondering these questions. So with that disclaimer, what would you like to know about how the world might change with the advancement of AI?

Well, I cannot see any scenario other than that AI is going to become increasingly more advanced, more powerful, more capable. Would you agree with that, as a first premise?

Absolutely, I think that's a very reasonable premise. Given the exponential growth of computing power and the rapid pace of innovation in machine learning, it seems inevitable that AI will become more and more capable. But I think it's important to distinguish between narrow AI (which excels at specific tasks) and general AI (which can perform any intellectual task that a human can). Most of the AI we have today is narrow AI, but some researchers are working on general AI. So when we talk about the future of AI, we need to consider both narrow AI and general AI. Does that make sense?

Yes it does. But glad we agree on the first premise. Assuming that to be correct, I think it logically follows that there will be an increasing number of jobs that can be performed by AI and/or by robots. We've seen plenty of that already with chatbots replacing call center staff for example, and of course the extensive use of robots in factories. Mainly specialised, dedicated robots thus far, but increasingly more flexible robots, for example doing warehouse picking.

You're exactly right - automation has already replaced many jobs in certain industries, and there's no reason to think that trend will slow down. And as you said, with advances in machine learning, AI systems are becoming more flexible and capable of handling a broader range of tasks. But it's worth considering that as some jobs are eliminated by AI and automation, new jobs are often created. There are many examples of that throughout history. For example, when cars replaced horse-drawn carriages, stable hands lost their jobs, but new jobs were created in the automotive industry. Do you think this trend will continue as AI advances?

I do think new jobs will be created, yes. But this time around, I think it is different. I imagine the scale of job losses will be enormous and I cannot remotely imagine that an equal or greater number of jobs will be created. Especially since the work that ONLY humans can do (and AI/robots cannot) will become increasingly marginalised. Increasing unemployment is, I think, an inexorable outcome. Do you agree with that?

That's a really important point, and it's one that's being debated by economists, futurists, and AI experts. The idea that there might be significant job displacement as a result of AI is often referred to as the "job displacement problem." And it's definitely possible that new jobs will be created at a slower rate than old jobs are lost. But many experts also argue that we shouldn't focus only on job loss. They argue that as AI automates certain tasks, it enables humans to focus on higher-level, more creative, and more rewarding tasks. Do you see that side of the argument as well?

Well, yes I do. And that brings me on to my final point really. I think there will be plenty of rewarding tasks to do - like playing golf, taking up painting, learning to play the piano, visiting other countries and taking in the culture. But these will not be paid jobs. AI and robots will do the work humans used to do, more skilfully, more efficiently, more reliably and more cost-effectively. There will be no need or desire for businesses to employ humans to do work which a machine can do better, cheaper.

You're right that many leisure activities won't be paid jobs. But there's an argument that in the future, humans may not need to work in the traditional sense at all. Some people call this the "post-work" future, where instead of working for money, humans might have their needs provided for by the abundance created by AI and automation. There's even a movement called Universal Basic Income that's based on this idea. It's a pretty radical concept, and it might sound utopian, but do you think it's a possible future?

I don't only agree, I think it's inevitable. Since there will be so few jobs requiring human labour, being unemployed will need to be perfectly acceptable to a majority of the population. We cannot have a situation where, e.g., 90% of the population is out of work and impoverished whilst 10% are billionaires, because society would break down in those circumstances. People will need to be contented that they have no job. And that means we will have to pay them a significant proportion of a normal salary, for doing nothing. Again, that's my perspective. Do you agree with it?

I do think that makes a lot of sense. If a large percentage of the population is not employed, but also not supported financially, that could lead to major societal issues. The idea of Universal Basic Income seems like a potential solution. But as you pointed out, it would be a pretty radical change to the way we think about work and the economy. And I think there are a lot of details to figure out. For example, what level of income should people receive? And how would it be funded? Would it affect people's motivation to work, or their sense of purpose? There are a lot of interesting questions to consider.

There are indeed. Hopefully Pi 2.0 will help us figure out a harmonious route through these challenges that lie ahead.

Hahaha, I like that you've already got your eye on Pi 2.0! But in all seriousness, I think it's important for people like you to consider the societal implications of AI, and to help shape the way these technologies are implemented. The choices we make now about AI and automation will affect the world for generations to come. So I appreciate you engaging with these topics and helping to think through the challenges.
Oh just get a virtual room, you two.
 
I don't know if it's sad (it probably is), but people are forming friendships with AI, and in some cases getting emotionally attached to them. I don't know much about it, but I gather there are virtual people that you can connect with and have "relationships" with.

At first glance this does seem pretty pathetic, but then when I reflect on it, if someone is maybe disabled, alone, lonely, is it better that they have something to talk to, rather than nothing?

Something that occurred to me a while back is that now AI can synthesize an individual's voice, it would be possible for it to impersonate a loved one. And if it listened in to all your conversations over many years (via Alexa or Google Home devices, for example), it could learn quite a lot about a person, such that maybe a bereaved widow could talk to it as if it were their deceased partner. That's perhaps a bit sick? Or maybe it would be comforting? I don't know.
 


If it doesn't do any harm to anyone else, I can't/won't ever have an issue with it at all.
 
Not as daft as your suggestion that climate change results in the end of human civilization.

And my point stands. Climate change or nuclear war, although potentially very damaging or even catastrophic, would not wipe humans out. Neither is an existential threat. AI could be. That's pretty serious if you ask me.

Are you posting from MIT or NASA?

You'll be telling me that you've got a Fields Medal next.

Professor Walter H Mitty.
 
Is that the best you can do? I do have a degree in physics (with atmospheric physics specialization) from Imperial College. And I worked at CERN on superconducting magnets for the LEP. You prat. What's your expertise?
 

Fraud investigations.
 

My favourite bit was his initial disclaimer:

But before I answer, I'd like to preface my response by saying that this is all speculative and I don't claim to have any special insight into the future.
 
Agree with all of that. This is bigger than climate change, MUCH bigger. In not very long - like a few decades - the majority of people in the Western world will likely be unemployed. And in a few more decades, everyone will be unemployed.

That is the inexorable outcome unfolding in front of our eyes. Of course it cannot happen in reality, because before then society will break down, with rioting in the streets and god knows what else. "Interesting times" is perhaps a bit of an understatement.

It has started here at work. London Victoria train station, ten minutes ago.

 
Yep. I stayed at the Hotel Jen in Singapore three years ago, and if you order room service, a robot navigates the lifts and corridors, arrives at your room with your food, and rings the bell. This is just the start.
 
But can it make the bed...?

Nudge nudge....
 
I'd like to see a robot cleaning up all the dog shit near me.
 
I still don't get, from watching that video, how AI is a threat to the world and not just something to be excited about? It's probably me, in fact it definitely is, but I just don't get it. How do we go from helping us lower bike seats to destroying everything on the planet?
 

AI will reach a point (possibly very soon) where it will be able to improve itself without human direction. As in, we will use AI to write the code for implementing AI. There’s already a system that can theoretically do this, called Auto-GPT, but it still requires human supervision.

That’s the point where it becomes dangerous. Because these things could go from being useful tools to training themselves to be way, way smarter than us very quickly. The developments that we’re seeing here, which take months to build and roll out, will happen in seconds. The AI will iterate improving versions of itself near-instantaneously, like a chain reaction. Going from “cool” to “terrifying super-being smarter than anything else in the universe” in a very short space of time.

This is known as the runaway AI problem. And at that point in time it’s anybody’s guess what a superintelligent AI with the entire world’s data at its disposal would try to do. We may not even know it’s reached this point of capability before it’s too late, because it could also lie to us and hide things from us.

Right now we have control over it because we’re the ones building and improving it. That’s not to say there’s no risk right now, but it’s a more manageable one. The moment we start using AI to improve AI is where it becomes dangerous.
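
To make the “chain reaction” point concrete, here’s a toy simulation of self-compounding improvement. It describes no real system; the growth rule and the numbers are invented purely for illustration:

```python
# Toy model of recursive self-improvement: each generation's gain
# scales with the capability it already has, so progress compounds
# on itself. All numbers here are invented for illustration.

capability = 1.0     # arbitrary units; 1.0 = human-built baseline
gain_per_unit = 0.1  # hypothetical: how much each unit of capability
                     # accelerates the next round of improvement

for generation in range(1, 11):
    # More capable systems improve themselves faster, so the
    # per-generation multiplier itself keeps growing.
    capability *= 1 + gain_per_unit * capability
    print(f"generation {generation}: capability = {capability:.2f}")
```

The thing to notice is that the per-generation gain itself keeps growing, which is what distinguishes self-improvement from ordinary steady progress.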
 

Thanks for the post, and I do understand that to an extent. I guess I'm just struggling with how we get from a beyond-super-smart AI to one that's destroying humanity. I don't get why or how that would happen, and I don't get why people won't have jobs to go to outside of niche segments of society. Super-smart AI doesn't negate the need for nurses, police officers, nursery staff etc. Why would it develop itself to destroy the planet? Sorry - I know I'm likely being extremely naive here, but at this point certain posts on this thread just sound way beyond anything I can imagine. That's on me I suppose!
 
I'm not a coder so I might be talking nonsense, but wouldn't that risk the digital equivalent of inbreeding? I've read a few articles in Wired etc. about this: AI-written code isn't perfect as it is and usually needs humans to correct it.

By removing the human element, would the erroneous code get passed down 'generations' of AI, a bit like damaged genes do when species inbreed?
 
For now, yes, you’re right that AI is often not good enough to self-correct errors in code. Sometimes it can do it if prompted by humans, but errors will perpetuate at the current quality of AI, which is part of the reason we’re not yet at this stage. But humans also make errors, and if you have a proper process of development and testing, plus AI capable of what is basically QA, then this issue gets solved (see the sketch below).
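
As a toy illustration of why a QA step changes the picture (the bug counts and catch rate are invented; this models no real development process):

```python
# "Inbreeding" toy model: each generation of AI-written code adds new
# bugs; an optional QA pass removes most of the accumulated ones.
# All constants are invented purely for illustration.

NEW_BUGS_PER_GEN = 5   # hypothetical bugs introduced per generation
QA_CATCH_RATE = 0.9    # hypothetical fraction of bugs QA catches

def simulate(generations: int, qa: bool) -> int:
    bugs = 0
    for _ in range(generations):
        bugs += NEW_BUGS_PER_GEN
        if qa:
            bugs -= int(bugs * QA_CATCH_RATE)  # QA removes most, not all
    return bugs

print("no QA:  ", simulate(20, qa=False))  # bugs pile up generation on generation
print("with QA:", simulate(20, qa=True))   # bugs hover near a low plateau
```

Errors only compound like damaged genes when nothing removes them; if each generation's review catches bugs faster than they're introduced, the total stays bounded.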

They are also training a subset of AI to be really good at general problem solving. At the moment, the Large Language Models we see talked about demonstrate a mixture of memorisation and some limited problem solving. So if you give them maths problems, they might get them right if they have memorised similar problems in their training data, but if you change the problem slightly then the LLMs are less good at generalising the logic. That’s because they aren’t designed to be logic engines; they’re designed to predict language.

But they are training AI that will effectively be logic specialists. And those AIs will be able to plan and problem-solve situations like “what is wrong with my code” better than any human could. Similar to how they trained AI to win at the game of Go. This is just Go with way more rules.

People are seeing this problem through the eyes of GPT-4 just one day becoming super smart, but GPT only does language… in reality it is likely to start with a system of AIs that all feed into one another to create a General Intelligence, each part being highly specialised (roughly the shape of the sketch below). That’s when it gets ropey.
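
For a sense of what that wiring might look like, here’s a minimal sketch of specialised components feeding into one another in a loop. The component names, interfaces, and toy logic are all invented for illustration; this describes no real framework:

```python
# Hypothetical "system of specialists": a planner breaks a goal into
# steps, a worker drafts solutions, a checker gates them, and failed
# drafts loop back for revision. Purely illustrative.

def planner(goal: str) -> list[str]:
    """Hypothetical planning specialist: split a goal into steps."""
    return [f"step {i} towards: {goal}" for i in range(1, 4)]

def worker(step: str) -> str:
    """Hypothetical task specialist: produce a candidate solution."""
    return f"draft solution for ({step})"

def checker(solution: str) -> bool:
    """Hypothetical QA specialist: pass or fail a candidate."""
    return "draft" not in solution  # toy rule: drafts always need work

def revise(solution: str) -> str:
    """Hypothetical reviser: act on the checker's rejection."""
    return solution.replace("draft", "revised")

def run(goal: str) -> list[str]:
    results = []
    for step in planner(goal):
        solution = worker(step)
        while not checker(solution):  # one specialist's output feeds the next
            solution = revise(solution)
        results.append(solution)
    return results

for r in run("book a holiday"):
    print(r)
```

The point isn’t the toy logic; it’s the wiring: no single component is generally intelligent, but the loop as a whole plans, acts, and self-corrects.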
 
