I think that's a little apocalyptic, but there's definitely a risk.
I've just been at a major Data & AI conference, and the consensus view there was that AI works best with a 20-60-20 approach: humans do the first 20% of the work, ensuring the input data to the LLMs is correct; the AI does the middle 60%; and human oversight of the output makes up the final 20%.
Technology has always thrown up new possibilities for doing things better, and we have to adapt. We don't have rooms full of typists these days, and we no longer prepare accounts by adding up figures manually, or on calculating machines where you punched in the numbers and pulled a handle, as in my accountancy days.
One of the presenters was in the process of building an agentic AI system to manage insurance product reviews for her company. She talked about a concept she called "LLMs-as-judges", which I wasn't initially clear about, but she explained that it involves multiple LLMs monitoring each other. I laughed when she explained it, because Airbus implemented that concept some 40 years ago on the A320. That was the first commercial airliner to use digital 'fly-by-wire' flight controls, replacing the mechanical-hydraulic systems that had previously moved the flight control surfaces. Airbus did this by having three completely separate teams programme the flight computers totally independently. In use, the computers would come to a majority view on the safest way to implement the pilot's instructions, and if one did something stupid because its programming team had got something wrong, the other two could override it.
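For what it's worth, the voting idea itself is simple enough to sketch in a few lines of Python. This is purely an illustration, not how Airbus or any particular LLMs-as-judges framework actually implements it: three independently produced answers to the same instruction are compared, the majority wins, and if there is no majority the case is escalated (to a human, in the AI setting). All names here are made up for the example.

```python
from collections import Counter

def majority_vote(outputs):
    """Return the answer agreed by a majority of independent systems.

    Raises ValueError if no majority exists, so the case can be
    escalated rather than acted on blindly.
    """
    counts = Counter(outputs)
    answer, votes = counts.most_common(1)[0]
    if votes > len(outputs) // 2:
        return answer
    raise ValueError("No majority agreement; flag for human review")

# Three independently produced answers; one system has gone wrong,
# and the other two out-vote it.
outputs = [
    "extend flaps to position 2",
    "extend flaps to position 2",
    "retract flaps",
]
print(majority_vote(outputs))  # -> "extend flaps to position 2"
```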
So, yes, there is a risk, but we've heard this sort of thing before, from the Luddites to those who thought computers would take all our jobs and leave us with loads of leisure time.