Out of curiosity when do you think AGI will appear?
Depending on who you listen to, it’s somewhere between multiple decades and multiple centuries away. Personally I’m thinking the latter.
AGI with poor moderation will be the danger for sure.
You will also have the West, which will likely put in lots of safeguards, but there's no guarantee others will.
Edit: meant to say the CoastRunners bug caused quite a few games companies to trial automated testing.
Honestly I'm not sure. One of the things you can witness throughout history is that we're not just inventing new technology faster, we seem to be improving it faster and faster too.
It took 1000 years to go from a wheel to a chariot. Another 1000 years or so to go to proper spoked wheels, 500 years to something approximating a bicycle and 100 years to a tyre and a horseless carriage. Now we have electric self driving cars, Formula 1 cars and are at the bleeding edge of what is possible aerodynamically.
In 1903, the Wright Brothers became the first verified humans to achieve powered flight - a distance of about 120 feet. 66 years later, we used those same basic ideas of thrust and lift and drag to put a human on the Moon. Now we have behemoth airliners, fighter jets and reusable space rockets.
The computer was invented arguably around 1950. 50 years later, it allowed instant worldwide communication and could calculate billions of operations per second, and now it's the basis of the entire world's economy, media, entertainment, banking, military, communication and social systems.
I mean, arguably we've only taken 20 years to go from the world's first computer that could beat a grandmaster, Deep Blue, to a chess AI in AlphaZero that could beat every grandmaster who ever existed combined. But chess AIs and LLMs and image generators are all single-task models. I'm not going to say it's easy to build a neural network that can perform a single task very well, but it's within the bounds of our technology. The stock market has already been changed by these task machines.
The real question here is how big the leap is from single-task to generalised models, and obviously, absolutely nobody knows the answer to that. The experts reckon 50ish years, and that's possible, but so is 100 years or 200 years. It took 3 billion years for genetic machines to go from single-task to multi-task orientation (or single-cell to multi-cell life), so maybe it's way off. Nothing that we can come up with has ever been within a tenth of a percent of the efficiency of genetic machines.
One of the reasons I can't put a date on it either is that I think it won't happen spontaneously. AGI will come from people hoping to build AGI, and there are political, safety, and other hurdles to overcome for teams working on it. My issue is that there's always some berk who is impatient and wants to utilise it for their business or politics or power, and one of them somewhere will skip a step, and then we're all fucked. Historically we're not very good at solving forward-looking problems; we sort of have an "ah, it will be fine" attitude to most things, and if we take this stance with AGI (which it appears we are), then as mentioned, we're all fucked. We need to quarantine this stuff the same way we do uranium production, because it's significantly more dangerous than that, but quarantining an idea rather than a physical good is a very difficult process.
I don't think ChatGPT-4 is going to kill everyone but ChatGPT-400 very well might.