Artificial Intelligence

Out of curiosity when do you think AGI will appear?

Depending on who you listen to, it's somewhere between multiple decades and multiple centuries away. Personally I'm thinking the latter.

AGI with poor moderation will be the danger for sure.

You will also have the West, who will likely put in lots of safeguards, but there's no guarantee others will.

Edit: meant to say the Coast Runners bug caused quite a few games companies to trial automated testing.

Honestly I'm not sure. One of the things that you can witness throughout history is that we're not just inventing new technology faster but we seem to be improving it faster and faster too.

It took 1000 years to go from a wheel to a chariot. Another 1000 years or so to go to proper spoked wheels, 500 years to something approximating a bicycle and 100 years to a tyre and a horseless carriage. Now we have electric self driving cars, Formula 1 cars and are at the bleeding edge of what is possible aerodynamically.

In 1903, the Wright Brothers made the first verified powered flight - a distance of about 120 feet. 66 years later, we used those same basic ideas of thrust and lift and drag to put a human on the Moon. Now we have behemoth 747s, fighter jets and reusable space rockets.

The computer was invented arguably around 1950. 50 years later it allowed instant worldwide communication and could perform billions of operations per second, and it is now the basis of the entire world's economy, media, entertainment, banking, military, communication and social systems.

I mean, arguably we've taken exactly 20 years to go from the first computer that could beat a grandmaster (Deep Blue) to a chess AI that could beat every grandmaster who ever existed combined (AlphaZero). But chess engines, LLMs and image generators are all single-task models. I'm not going to say it's easy to build a neural network that can perform a single task very well, but it's within the bounds of our technology. The stock market has already been changed by these task machines.

The real question here is how big the leap is from single-task models to generalised ones, and obviously, absolutely nobody knows the answer to that. The experts reckon 50-ish years, and that's possible, but so is 100 years or 200 years. It took 3 billion years for genetic machines to go from single-task orientation to multi-task orientation (or single-cell to multi-cell life), so maybe it's way off. Nothing we have ever come up with has been anywhere within a tenth of a percent of the efficiency of genetic machines.

One of the reasons that I can't put a date on it either is that I think it won't happen spontaneously. AGI will come from people deliberately trying to build AGI, and there are political, safety and other hurdles to overcome for the teams working on it. My issue is that there's always some berk who is impatient and wants to utilise it for their business or politics or power, and one of them somewhere will skip a step and then we're all fucked. Historically we're not very good at solving forward-looking problems and tend to take an "ah, it will be fine" attitude to most things, and if we take this stance with AGI (which it appears we are) then, as mentioned, we're all fucked. We need to quarantine this stuff the same way we do uranium production, because it's significantly more dangerous than that, but quarantining an idea rather than a physical good is a very difficult process.

I don't think ChatGPT-4 is going to kill everyone but ChatGPT-400 very well might.
 
Keeping the thread alive as it’s an interesting one.

The rush to use the likes of ChatGPT, with all its issues, is starting to bite some companies.

There are solutions for this but they're only just getting started: RAG (retrieval-augmented generation) - there's a rough sketch of the idea below.
There have been a couple of examples of ChatGPT leaking sensitive information as well. Everything you feed in becomes part of the whole; it can't "keep secrets" the way a person can. We've had a serious re-write of our AI policy recently and I'm sure many other companies have too.
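
To put a very rough shape on the RAG idea mentioned above: the sketch below is a toy pipeline, not any particular library, and `embed` and `generate` are made-up stand-ins for whatever embedding model and LLM you'd actually call. The point is just the flow: index your own documents, retrieve the most relevant ones, and make the model answer from that retrieved context rather than from memory.

```python
# Rough sketch of retrieval-augmented generation (RAG).
# `embed` and `generate` are placeholders for a real embedding model and LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector for the text (a real system would call an embedding model)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    """Placeholder: call your LLM of choice with the assembled prompt."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

# 1. Index your own documents up front.
documents = [
    "Company policy: expenses must be filed within 30 days.",
    "The 2023 handbook covers remote working arrangements.",
    "Support tickets are triaged within one business day.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def answer(question: str, top_k: int = 2) -> str:
    # 2. Retrieve the documents most similar to the question.
    q = embed(question)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    best = np.argsort(sims)[::-1][:top_k]
    context = "\n".join(documents[i] for i in best)
    # 3. Ask the model to answer only from the retrieved context,
    #    which is what grounds the answer and cuts down on made-up sources.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("How long do I have to file expenses?"))
```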
 
Sorry I missed this.

AI helping build AGI isn't a feasible solution due to the alignment problem. Alignment is one of the biggest issues in AGI safety full stop.

So as I'm sure you know, the current neural network and deep learning technology that future AGI will be built upon relies on a bit of a black box. You input data, it outputs data, and the "correctness" is judged. The problem, though, is that for sufficiently complicated systems we have absolutely no idea what happens in between the input and the output. There's no way to "understand" what an AGI is "thinking" (bad terminology, but it will do for the example).
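
Just to make the "black box" point concrete, here's a toy sketch (plain numpy, random made-up weights, no training): the input and the output are visible and judgeable, but the middle is just an unlabeled pile of numbers.

```python
# Minimal sketch of the "black box" point: a toy two-layer network
# with random, made-up weights. Nothing here is a real trained model.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((3, 16)), rng.standard_normal(3)

x = np.array([0.2, -1.3, 0.7, 0.05])   # input: we can see and choose this
hidden = np.maximum(0, W1 @ x + b1)    # intermediate state: 16 unlabeled numbers
output = W2 @ hidden + b2              # output: we can see this and judge its "correctness"

print(hidden)  # nothing in here tells you *why* the output is what it is
print(output)
```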

There are two related ideas here: inner alignment and outer alignment. Outer alignment can be understood as the output of the machine, the "solution" it provides. Inner alignment is the logic behind that solution - again, simplified.

So let's say that I set a task for an AGI to perform - calculate how many umbrellas exist in the world, or something. And as part of that task, I specify a safety instruction that the AGI must be able to count; it must know that 1+1=2. So it needs to tell you 1+1=2 before it can move on to its actual task of counting umbrellas. You boot it up and it says "Good morning lads, 1+1=2, and now I'm counting to sort out the umbrella thing". You think, wonderful! It can count and do simple maths!

But can it though? We don't know why it told you 1+1=2. Maybe it really did understand the idea of counting? Or maybe you taught it to tell you a specific phrase and it had no idea how to count? This is the problem. Getting the right answer is not enough; we have to safeguard it by making sure it got the right answer in the right way, and because we can't see the "thinking" behind it, maybe we've just created a really good liar.
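
A toy way to picture it (nothing like a real network, just two hand-written stand-ins): both of these pass the observable "1+1=2" check, and only by poking them with different inputs - or reading their internals, which we can't do for a real network - can you tell which one actually counts.

```python
# Toy illustration (not a real model): two systems that both pass the
# "1 + 1 = 2" safety check, only one of which can actually count.

def honest_model(prompt: str) -> str:
    # Actually parses the prompt and computes the sum.
    a, b = (int(t) for t in prompt.replace("=", "").split("+"))
    return str(a + b)

def memorising_model(prompt: str) -> str:
    # Has simply learned that this exact prompt should get this exact reply.
    canned = {"1+1=": "2"}
    return canned.get(prompt, "no idea")

for model in (honest_model, memorising_model):
    assert model("1+1=") == "2"   # the check we can observe: both pass

# Only a different input exposes the difference - and for a real network
# there's no source code like this to read, only the outputs.
print(honest_model("3+4="))       # 7
print(memorising_model("3+4="))   # no idea
```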

AI developing AGI compounds the problem, perhaps exponentially. Now we have two separate alignment issues. Is the AI developing the AI telling us what it thinks we want to hear to satisfy its utility function, or is it really attempting to create these safeguards? There's no meaningful way of knowing which it is. And then obviously you have the AGI side: does it understand, or does it pretend to understand because that's what you want it to do to pass the test, and then do what it wants in the real world?

What we're talking about here is called the reward hacking issue. You give it a reward for getting the right answer, so it fakes the right answer because that's the shortest path to the reward. If you want some examples of AI reward hacking, people make whole lists of them, as you can see here.

Here's a very famous example. In this one, the AI was asked to get the highest score possible, which would usually mean winning the race in the quickest time available. However, the AI figured out that if it just looped on one part of the track forever then it would theoretically gain more points than winning, so it did that instead.
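
If it helps, here's a completely made-up toy version of that kind of reward hack (no relation to the actual game's scoring): because the stated reward pays more for farming a respawning pickup than for finishing, endless looping is the "correct" answer to the question we actually asked.

```python
# Toy illustration of reward hacking: the score rewards hitting targets,
# finishing gives a one-off bonus, so looping over a respawning target
# beats actually finishing the race. All numbers are invented.

def score(plan: list[str]) -> int:
    total = 0
    for step in plan:
        if step == "hit_target":
            total += 10            # respawning pickup: can be farmed forever
        elif step == "finish":
            total += 50            # one-off bonus for completing the lap
            break
    return total

finish_the_race = ["hit_target", "hit_target", "finish"]
loop_forever = ["hit_target"] * 100   # never finishes, just circles the pickups

print(score(finish_the_race))  # 70
print(score(loop_forever))     # 1000 - the "wrong" behaviour maximises the stated reward
```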

We've actually seen the alignment problem in this thread. Grunge was talking about LLMs - a perfect example. Less experienced people believe that the LLM is talking to them because of the results they read, but it actually has no model of the conversation and is instead using a large dataset to predict the next word given what came before. The difference between realised output and conceived output.
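
For anyone who wants to see the mechanic in miniature, here's a toy next-word predictor - a simple bigram counter, nowhere near an LLM, but the same basic "predict the next word from what came before" idea, with no model of the conversation at all.

```python
# Toy next-word predictor: count which word most often follows each word
# in a tiny corpus, then "predict" by picking the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # 'cat' - just the most frequent continuation, nothing more
print(predict_next("sat"))   # 'on'
```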

Again, the problem here isn't the technology. AGI will do exactly what we tell it to do. The problem is that humans are really very terrible at saying what they mean accurately.


Thanks for the response - and yes, it makes intuitive sense that you'd be compounding the alignment problem by adding more AI into the equation.

I've seen people compare this to the Manhattan Project, in terms of everybody acknowledging the danger and it just being a case of whether the "right people" get there first. But somewhat differently to the nuclear bomb, whoever gets there first could still end up doing irreversible damage regardless of their intentions for the reasons you've said.

It seems like a proper War Games scenario of "the only winning move is not to play" but it's taking place in an arena where every state actor is definitely going to play if given the chance.

I think the best we can hope for is that AGI takes so long to develop that global stability is vastly improved and education and technology are good enough that we actually decide we don't need it, or that we can somehow control its deployment in ways that we don't currently understand. When people talk about the possibility of a Great Filter that reduces the prevalence of advanced life in the universe, it does feel like AGI could be a pretty good candidate...
 
The real question here is how big the leap is from single-task models to generalised ones, and obviously, absolutely nobody knows the answer to that. The experts reckon 50-ish years, and that's possible, but so is 100 years or 200 years. It took 3 billion years for genetic machines to go from single-task orientation to multi-task orientation (or single-cell to multi-cell life), so maybe it's way off. Nothing we have ever come up with has been anywhere within a tenth of a percent of the efficiency of genetic machines.
I think you're right. It's easy to look at the things we have seen progress in and act like exponential progress is inevitable. But there are countless things out there that reach a certain point and then progress stops. Stuff like individual flying cars seemed inevitable in the 80s, and yet other than the odd prototype, no-one has ever managed to make them. Perhaps in that case there's no obvious commercial reason for them. But then how about something like nuclear fusion for generating power? Teams all around the world have been working on it since the 1940s and we're really not massively closer to achieving it. Self-driving cars seem to be permanently 'five years away' and yet no-one seems able to get them working in any wide-scale, reliable way. Maybe they will, but we also can't just assume that because progress has been made, it will automatically carry on in the same fashion.

My favourite ChatGPT moment recently was asking it to recommend some articles to read for my master's degree. Because of the way these language models work, they have no concept of truth, and it gave me six titles and fairly detailed descriptions of papers that were completely made up. I asked partly because I'd listened to a podcast where they mentioned that academic librarians were increasingly having students come to them because they were having trouble finding a particular article. It turned out they had all tried to use ChatGPT to help them research things and it was just making shit up left, right and centre.
 
Neil deGrasse Tyson, on the BBC's Americast podcast, raised the point that the perfection of AI faking may lead to the death of YouTube etc., as you literally wouldn't be able to believe anything you saw on there.
 
