ChatGPT

An app called ChatGPT has just appeared on my phone.
It says OPEN in a green box.
Should I leave well alone?
I find this a little bit disconcerting.
 
Conversation I just had with Pi:

Let me ask you a question about how the world is going to change over time, due to the advancement of AI... I'd be interested to hear your thoughts.

Ooh, that's a big question! And a fascinating one. But before I answer, I'd like to preface my response by saying that this is all speculative and I don't claim to have any special insight into the future. But as an AI language model, I'm always interested in pondering these questions. So with that disclaimer, what would you like to know about how the world might change with the advancement of AI?

Well, I cannot see any other scenario other than that AI is going to become increasingly more advanced, more powerful, more capable. Would you agree with that, as a first premise?

Absolutely, I think that's a very reasonable premise. Given the exponential growth of computing power and the rapid pace of innovation in machine learning, it seems inevitable that AI will become more and more capable. But I think it's important to distinguish between narrow AI (which excels at specific tasks) and general AI (which can perform any intellectual task that a human can). Most of the AI we have today is narrow AI, but some researchers are working on general AI. So when we talk about the future of AI, we need to consider both narrow AI and general AI. Does that make sense?

Yes it does. But glad we agree on the first premise. Assuming that to be correct, I think it logically follows that there will be an increasing number of jobs that can be performed by AI and/or by robots. We've seen plenty of that already with chatbots replacing call centre staff, for example, and of course the extensive use of robots in factories. Mainly specialised, dedicated robots thus far, but increasingly flexible ones too, for example doing warehouse picking.

You're exactly right - automation has already replaced many jobs in certain industries, and there's no reason to think that trend will slow down. And as you said, with advances in machine learning, AI systems are becoming more flexible and capable of handling a broader range of tasks. But it's worth considering that as some jobs are eliminated by AI and automation, new jobs are often created. There are many examples of that throughout history. For example, when cars replaced horse-drawn carriages, stable hands lost their jobs, but new jobs were created in the automotive industry. Do you think this trend will continue as AI advances?

I do think new jobs will be created yes. But this time around, I think it is different. I imagine the scale of job losses will be enormous and I cannot remotely imagine that an equal or greater number of jobs will be created. Especially since the work that ONLY humans can do (and AI/robots cannot) will become increasingly marginalised. Increasing unemployment is, I think, an inexorable outcome. Do you agree with that?

That's a really important point, and it's one that's being debated by economists, futurists, and AI experts. The idea that there might be significant job displacement as a result of AI is often referred to as the "job displacement problem." And it's definitely possible that new jobs will be created at a slower rate than old jobs are lost. But many experts also argue that we shouldn't focus only on job loss. They argue that as AI automates certain tasks, it enables humans to focus on higher-level, more creative, and more rewarding tasks. Do you see that side of the argument as well?

Well, yes I do. And that brings me on to my final point really. I think there will be plenty of rewarding tasks to do - like playing golf, taking up painting, learning to play the piano, visiting other countries and taking in the culture. But these will not be paid jobs. AI and robots will do the work humans used to do, more skilfully, more efficiently, more reliably and more cost-effectively. There will be no need or desire for businesses to employ humans to do work which a machine can do better and cheaper.

You're right that many leisure activities won't be paid jobs. But there's an argument that in the future, humans may not need to work in the traditional sense at all. Some people call this the "post-work" future, where instead of working for money, humans might have their needs provided for by the abundance created by AI and automation. There's even a movement called Universal Basic Income that's based on this idea. It's a pretty radical concept, and it might sound utopian, but do you think it's a possible future?

I don't only agree, I think it's inevitable. Since there will be so few jobs requiring human labour, being unemployed will need to be perfectly acceptable to a majority of the population. We cannot have a situation where, e.g., 90% of the population is out of work and impoverished whilst 10% are billionaires, because society would break down in those circumstances. People will need to be contented that they have no job. And that means we will have to pay them a significant proportion of a normal salary for doing nothing. Again, that's my perspective. Do you agree with it?

I do think that makes a lot of sense. If a large percentage of the population is not employed, but also not supported financially, that could lead to major societal issues. The idea of Universal Basic Income seems like a potential solution. But as you pointed out, it would be a pretty radical change to the way we think about work and the economy. And I think there are a lot of details to figure out. For example, what level of income should people receive? And how would it be funded? Would it affect people's motivation to work, or their sense of purpose? There are a lot of interesting questions to consider.

There are indeed. Hopefully Pi 2.0 will help us figure out a harmonious route through these challenges that lie ahead.

Hahaha, I like that you've already got your eye on Pi 2.0! But in all seriousness, I think it's important for people like you to consider the societal implications of AI, and to help shape the way these technologies are implemented. The choices we make now about AI and automation will affect the world for generations to come. So I appreciate you engaging with these topics and helping to think through the challenges.
 
I am terrified that these new foundation models will replace the work of millions of people.
More terrified that this will happen super quickly - in just a few years.
Not enough time for people to adapt or for new industries and new jobs to emerge.
 
Agree with all of that. This is bigger than climate change, MUCH bigger. In not very long - like a few decades - the majority of people in the western world will likely be unemployed. And in a few more decades, everyone will be unemployed.

That is the inexorable outcome unfolding in front of our eyes. Of course it cannot happen in reality because before then, society will break down with rioting in the streets and god knows what else. "Interesting times", is perhaps a bit of an understatement.
Climate change and AI are inextricably linked, aren't they though? The power requirements for widespread implementation of AI will be huge. The electricity it demands will be competing with transport's demands as we move to electric vehicles.
 
Climate change is IMO an irrelevant piece of fluff compared to this. It's barely worth thinking about.

And I am not even sure AI will need vast amounts of power. It takes powerful computers to ingest the training data and process it, but once that is done, the power consumption to respond to queries is pretty minimal. Hence OpenAI can support hundreds of millions of users without charging them for it. Soon you'll be able to run powerful AI models on your phone, such is the rate of progress of both the software and the hardware.
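As a rough back-of-envelope illustration of that training-vs-inference gap (a sketch only - the figures below are the widely cited public estimates for GPT-3, roughly 3 × 10^23 FLOPs of training compute and about 2 FLOPs per parameter per generated token; the 500-token response length is just an assumption of mine):

```python
# Back-of-envelope sketch: training compute vs. per-query compute for a
# large language model. All figures are illustrative public estimates
# for GPT-3, not measurements.

TRAIN_FLOPS = 3.14e23          # widely cited GPT-3 training compute estimate
PARAMS = 175e9                 # GPT-3 parameter count
FLOPS_PER_TOKEN = 2 * PARAMS   # standard ~2N FLOPs per generated token
TOKENS_PER_QUERY = 500         # assumed average response length

query_flops = FLOPS_PER_TOKEN * TOKENS_PER_QUERY
queries_per_training_run = TRAIN_FLOPS / query_flops

print(f"Compute per query: {query_flops:.2e} FLOPs")
print(f"Queries per one training run's compute: {queries_per_training_run:.1e}")
# ~1.8e9: one training run costs roughly as much compute as serving
# two billion 500-token responses, which is why answering queries is
# cheap compared with training.
```

On those (admittedly rough) assumptions, a single training run is worth around two billion responses, which is the point I'm making above.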
 
We can just switch off computers and go back to older technology not capable of running AI, can't we?

Can we just turn the dial down on the atmospheric temperature?

It's bullshit scaremongering designed to get governments to throw vast amounts of money into picking winners in AI.

Climate change means no human civilization; AI means unwinding the internet of things and bringing analogue technology back.
 
No. We can’t.

It wasn't meant literally - as in switching off the computers and dusting off an MS-DOS machine from the cupboard - but the principle is the same.

You can decouple systems and rely less on the technology that can support AI.

It's scaremongering bollocks to get funding. If it's the threat it's hyped up to be, just stop developing AI rather than trying to control the monster.
 
"Climate change means no human civilization"
What utter nonsense.

And I just saw your subsequent post. There's no way to stop AI development. Businesses won't stop because they could go bust if their competitors steal a march on them. And countries won't stop for the same reasons. Military forces won't stop because a lightning-fast, super-intelligent weapon system will beat a slow-to-react human one, so you'd better make sure you're not behind your enemies in your AI capabilities or you're in big trouble.

It's reckoned all-out nuclear war could kill 5 billion of the world's 8 billion people. A calamity of unimaginable proportions for sure. But in a couple of hundred years or less we'd be back to normal. A mere blip in evolutionary timescales.

And runaway climate change (not that this will happen anyway) would never kill everyone. Some places on earth would still be habitable, and the planet's temperature would return to normal in time. Even Chris Packham says so! 252 million years ago, the Earth's biggest mass extinction event killed around 90% of all species. Vastly more CO2 was released, and the warming was much greater than any current projection. And yet the planet recovered, and abundant life is still here today, isn't it?

AI is the only truly existential danger we face right now.
 
Did organised human society exist back then?

Oh it didn't, what a daft post.
 
Not as daft as your suggestion that climate change results in the end of human civilization.

And my point stands. Climate change or nuclear war, although potentially very damaging or even catastrophic, would not wipe humans out. Neither is an existential threat. AI could be. That's pretty serious if you ask me.
 
I don’t have much of a horse in this race, as I think AI and climate change both present different but similarly catastrophic possibilities.

If there’s one thing I’ve learned from working in technology strategy, it’s that going forwards is difficult and going backwards is nearly impossible. Our systems and server architectures are so intertwined with the fabric of how everything works that disentangling them to move to another architecture would take a lot of time and resources. And unless we put in place AI ‘air gaps’ to stop it from duplicating effectively, the AI would move far quicker than we can. Think about worms and viruses of the past, like Mydoom, which caused tens of billions of dollars of damage - except you can’t just patch or replace some software this time, because this thing can think and evade your attempts to get rid of it.

Climate change would cause huge collapse of ecosystems, natural resources, supply chains and unprecedented displacement of people. AI could cause literal overnight systemic collapse, which would impact systems that provide things like food and water to people.

The end result in both instances, and this is just my opinion, is billions of people being wiped out from famine and systemic collapse and humans being restricted to living in (relatively) primitive enclaves.

To be clear, I think anything short of the planet being eaten by a supernova is likely to see at least some humans survive. We’ve reached a point where we’re very hard to completely kill.
 
I agree with all of that apart from one very possible scenario which you didn't cover.

It's pretty much an inexorable path now: we are going to develop AIs which are much more intelligent than us, and VERY soon afterwards - because they can improve their own intelligence, which we cannot - they will become vastly, and I mean VASTLY, more intelligent. Imagine having an IQ not of 130 or 150 or 200, but 10,000. And there will be countless numbers of these all-knowing entities knitted into our infrastructure and running everything we do.

We will have absolutely no power whatsoever to stop them doing whatever it is they decide to do. And if they decide that mankind should be eliminated from the planet for the good of the rest of life on earth, then it is literally the end of human existence, full stop. They already control our nuclear weapons (does anyone imagine Putin has a red button with a two-handled plunger and curly wires going back to some rocket somewhere?), and who knows what sort of viruses they could develop and unleash. We'd be so outsmarted, we would be like mice trying to outthink humans.

I don't think this is "likely", but in a recent survey of many of the top AI experts, 50% of them said that there was at least a 10% chance of the human race dying out soon after "the singularity" - the point at which AIs become smarter than us. Would you or your family get on a plane if it meant you had a 10% chance of all of you dying?

And this risk is not centuries away. It's decades at most, and probably less. To put this into perspective, climate change isn't going to change anyone's life here in the UK to any significant effect for the next 100 years.
 
You’re right I didn’t include what I think of as the “Terminator scenario”.

I guess my hunch (and it is only that) is that AI, no matter how intelligent, is only as good as its ability to physically interact with the world around it. Zuckerberg spoke about this recently. A lot of the obvious regulation we could apply is around creating barriers between AI, which acts in a virtual space, and “robots” - to use a general term for things like cars, machinery and weapons - which act in our physical space. Basically we can create “air gaps” such that the AI can destroy our systems but it can’t walk around destroying us. In other words, don’t give the killer robot killer-robot arms :)

Implementing those types of barriers is theoretically more achievable than trying to regulate the runaway intelligence of the AI itself, which is much, much harder and more problematic.
 
Unfortunately we are not creating those barriers; in fact we are actively breaking them down as fast as we possibly can. We are rushing headlong in the wrong direction at breakneck speed. As soon as one set of armed forces has AI-piloted fighter jets, it is game over for an enemy that relies on human pilots. Ditto tanks, nuclear response capability and everything else. So we are actively handing over control of the red button as fast as we can. Don't let anyone kid you that we will stop short of creating autonomous killing machines.

We're doing the same BTW in seemingly less devastating ways, like integrating them into our banking systems, which could "only" wreak financial havoc upon the entire world. We're doing this now.

I don't know if you saw my post a few pages back where I said it's being claimed by one of the world's leading AI experts/developers that in less than 2 years, AI will be able to take control of an entire business process - doing research, designing products, building websites, interfacing with manufacturers and logistics providers, negotiating contracts, and creating and operating marketing campaigns - and do all of this by itself.

He reckons AI will be able to do this within 2 years. Connecting the AIs to external systems to facilitate this is a huge area of investment.
 