All things AI.

I think that's a little apocalyptic but there's definitely a risk.

I've just been at a major Data & AI conference and the consensus view is that there's ideally a 20-60-20 approach to AI, with humans doing the first 20% of ensuring the input data to LLMs is correct, then 60% of the work being done by the AI, then human oversight of the output being the final 20%.

Technology has always thrown up new possibilities of doing things better and we have to adapt. We don't have rooms full of typists these days, we don't have to prepare accounts by adding up figures manually, or on calculating machines where you punched the numbers in and pulled a handle, like in my accountancy days.

One of the presenters was in the process of building an agentic AI system to manage insurance product reviews for her company. She talked about a concept she called "LLMs-as-judges", which I wasn't initially clear about, but she explained it involved multiple LLMs monitoring each other. I laughed when she explained that, as Airbus implemented that concept 40 years ago on the A320. That was the first commercial airliner to use digital 'fly-by-wire' flight control systems, replacing the mechanical and hydraulic systems that had previously powered flight control surfaces. They did this by using three completely separate teams to programme the flight control computers totally independently. In use, the computers would come to a majority view on the safest way to implement the pilot's instructions, and if one did something stupid because its programming team had got something wrong, the other two could override it.
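The idea is the same whether the redundant "channels" are flight computers or LLM judges: run independent implementations and act only on the majority answer. Here's a minimal toy sketch of that voting logic — all the function names and outputs are made up for illustration, not real flight-control or LLM code:

```python
# Toy sketch of triple-redundancy majority voting. Three independently
# built "channels" answer the same request; a single faulty channel is
# outvoted by the other two. All names/values here are hypothetical.
from collections import Counter

def channel_a(command: str) -> str:
    return "deflect_elevator"   # independent implementation 1

def channel_b(command: str) -> str:
    return "deflect_elevator"   # independent implementation 2

def channel_c(command: str) -> str:
    return "hold_position"      # implementation 3 with a simulated bug

def majority_vote(command: str) -> str:
    """Run all three channels and return the majority answer."""
    outputs = [ch(command) for ch in (channel_a, channel_b, channel_c)]
    winner, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        # no two channels agree: drop to a safe fallback mode
        raise RuntimeError("no majority - fall back to safe mode")
    return winner

print(majority_vote("pitch_up"))  # faulty channel_c is outvoted
```

The key property is that the channels are built independently, so they are unlikely to share the same bug — which is exactly the argument behind using multiple different LLMs as judges rather than asking one model to check itself.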

So, yes, there is a risk but we've also heard this stuff before, from the Luddites to those who thought computers would take all our jobs and we'd have loads of leisure time.

It’s very generous to give the first section 20% there; it’s a shedload more than that, at least for the next few years. I’ve been going through it for the last few years and the best thing about AI still isn’t AI itself: it’s making everyone properly realise how important foundational data is.
 
Fwiw, as the pilot in command of a 787, my inputs go into “gatekeeper” computers that “listen to the ask” and decide whether it’s reasonable and within ITS parameters for the way IT thinks the aircraft should fly, THEN allow my inputs to be exercised by the flight controls IF IT IS HAPPY!

So far, we haven’t had a serious disagreement!
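That "gatekeeper" behaviour — envelope protection, where the computer clamps a pilot's request to what it considers safe — can be sketched in a few lines. The limits and names below are purely illustrative, not real 787 parameters:

```python
# Toy sketch of an envelope-protection "gatekeeper": the pilot's request
# passes through only if it stays inside limits the computer considers
# safe; otherwise it is clamped. Limits here are made-up examples.
MAX_PITCH_DEG = 30.0
MIN_PITCH_DEG = -15.0

def gatekeeper(requested_pitch_deg: float) -> float:
    """Return the pitch command actually sent to the flight controls:
    the pilot's request, clamped to the allowed envelope."""
    return max(MIN_PITCH_DEG, min(MAX_PITCH_DEG, requested_pitch_deg))

print(gatekeeper(45.0))   # over the limit: clamped to 30.0
print(gatekeeper(10.0))   # within the envelope: passed through as-is
```

The "disconnect switch" debate mentioned later in the thread is essentially about whether the human can bypass this clamping when the computer's idea of "safe" is wrong.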

Funny anecdote on the A320…

When it first came to airlines with traditional Boeing fleets, there was a “3 step learning curve” to understanding the aircraft.

New guy: “What the hell is it doing?”

Mildly experienced: “I’ve seen it do that before.”

Experienced: “Oh yeah, don’t worry about that, it does it all the time!”

Lastly, I was talking to a BMI A330 Captain in Vegas back in the day when I was a 320 Captain. We were talking about the level of automation and the aircraft “helping” you as a pilot.

His words? “Yeah, it’s like Black Magic below 50 feet isn’t it?!” Hilarious, and true!!

Automation is awesome but, as humans, we need to ensure we have a “disconnect” switch before the automation makes a decision that you, as the operator/manager, don’t want it to make. We’ve all seen the Airbus flyby where the aircraft settled into the trees at the end of a low & slow pass over the airfield with dignitaries galore in attendance!!!
 
Okay, I thought this, in particular, was an interesting search find (off a channel of someone I despise, actually, but I wanted to be in the general loop of knowledge!). The scope can be limitless but, as always, there are gonna be pitfalls with this.

With that said, I wanted to grab your thoughts on it...

 




Do that in Piccadilly and you'll end up being robbed, stabbed, and beaten for the tech you are wearing :)
 
Besides that, does this 'upskill' people for other standard and generalised situations, or take people out of jobs?

I can see governments and society killing this before it gets to that stage; people say that it won't happen, but it will.

We have gone too far with it already; it'll start a war one of these days by faking a video of the wrong person.

An interesting video, though. Here's the analogue to that :)

[image attachment]
 

Interesting answer. You can't put a stop to capitalism like this, can you? That would only happen if it starts leading down genuinely dangerous paths.
 
Along with the environment, it's the most obvious, foreseen-for-decades problem ever. Fuck humans, they really are beyond fucking help.

I'm also not too sure what to make of the characters who seem to be leading the most prominent AI companies at the moment.
 
Yes, the first 20% will need to be more than that for the foreseeable future (i.e. the next few years).
 
I don't think it will replace people in its current form. At the moment AI is just a crazy good productivity tool, but people are starting to discover new applications for it.

I was a skeptic, but I have used it to reduce my bills, find the best electricity tariffs and model my pension, and last week it told me how to mend my broken dishwasher, step by step, with pictures. It will definitely replace the Google search engine; if I'm looking to buy something new, I now ask ChatGPT to compare options.

AI is the business opportunity of a lifetime and it'll take over everything we touch, but it's still early days. Trades are going to be safe, but if you can imagine dreaming up a computer tool that could replace you in your job, then you might be in trouble in 10-20 years' time.
 

That video instruction is one step up from the AI pic instruction you did. It'll be interesting to see if real-time analysis and a real-time guide to fixing your car could be on the menu in the future.

That's when I'll truly know how things are going to go!
 
Yep, exactly, and I reckon that's very close already. The number of applications is going to be crazy, and I really don't think people fully appreciate how big an impact it is going to have on our lives.

I think some people are worried that it will start with robots replacing mechanics and that kind of thing, but that's way off, and only because the robot technology is way off. AI will accelerate the development of that kind of technology because it will help scientists and engineers get answers quicker.

We read a lot about politics at the moment but, to be quite honest, I think in 20 years' time we won't need politicians, because AI will have all the information needed to decide on the best course of action.

Everything related to AI will grow along a rapidly steepening curve, and we're somewhere in the bottom ANI (artificial narrow intelligence) section of that curve at the moment.

[image: The Future of AI chart]
 


The thing I find fascinating with AI is that you end up with these emergent properties. Nobody taught the video models how things like physics work; they’ve just seen so many examples that they essentially become really good physics predictors without understanding a single physical law.

You can see the clips still often have that weightless/floaty aesthetic but it’s crazy how much that understanding of physics has improved from one model to the next.

We saw these emergent properties with other forms of AI. As you train it on more data, it begins to understand things you never sought to teach it in the first place. Nobody taught an LLM the rules of iambic pentameter, but it can do a good job of making a poem structured in it. This is what makes the “well, they’re just regurgitating stuff they’ve already seen” argument ring really hollow with me. Like, yes… that’s how literally every person works. Everything you do, think, and say is a product of combining the “data” you have taken in through your senses at some point in the past. When was the last time you had a truly original thought that wasn’t just some combination of previous thoughts? When we taught an AI to play chess better than any human, we trained it on moves that humans had previously made; you can make the same argument that it’s just regurgitating what it’s seen people do, and yet it still beats every person easily.

This is what underpins my belief (and I respect it is only a belief) that humans aren’t really special or different to these AI models in a fundamental sense. What we see as the difference - or a “soul” as some might call it - is really just the emergent properties that have come from us operating at enormous scales of data input that these LLMs and models don’t yet match (but soon will).

There will be a bunch of people who continue to say “it’s not technically AI, it’s just a model” but that semantic distinction won’t matter when you lose your job to it.
 
