ChatGPT

last of the doom-y posts here.

Bruce Schneier is an expert, not a crackpot. Not an outsider, not an outlier. Not a radical or someone prone to exaggeration. Someone to be listened to.


Same article:


What he's saying here is almost self-evident, if you've understood how the technology of the last decade has been used by corporations.

If not for ourselves, we perhaps should consider if this is a satisfactory state of affairs to hand down to the next generation.
 
1984 on steroids
 
All of this makes me think our best hope for the future is a superintelligence like the Minds Iain Banks wrote about in the Culture series. Immensely powerful, but it kind of treats humans like they matter, even though they don't :)
 
100% spot on. There's nothing "magic" about the human brain, other than it's incredible that evolution could have produced such an amazing thing. It is absolutely inevitable that man-made neural networks will soon be equally capable, then a bit more capable, then a lot more capable, and then massively more capable. While we are limited by our own biology, synthetic AI will not be. It will be able to have more and more neurons, processing faster and faster.

The thing that's actually pretty amazing is that GPT-4 has around 100× fewer "neurons" (parameters) than the human brain has synapses, and yet it already knows more than any human on the planet. It understands umpteen languages and has the whole of Wikipedia and umpteen other sources at its fingertips. We've invented a learning algorithm that is much better at learning than our own brains are.

There is something very magical about the human brain. We are taught how to learn.

Today’s ChatGPT challenge was to throw a completely made-up word (“hudemeflip”) into a conversational question. You, as a human, would be able to work out that the made-up word meant "shop" and be able to tell me other things to buy.

[Attached screenshot: IMG_4081.jpeg]

I changed the style of the question and it “seemed” to understand better.


[Attached screenshot: IMG_4082.jpeg]

It “feels” more like a parlour trick at this point. I just don’t see how we can replicate or improve on something like the human brain when we barely understand it ourselves; it’s hard to observe in action. That said, AI is going to be (and already is) an incredible tool.
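For what it's worth, the trick of working out a nonce word from its surroundings can be caricatured without any neural network at all. The toy sketch below (the categories and clue words are invented for this example) just scores a sentence's context words against a couple of hand-written word lists; a real language model learns such associations from data rather than a lookup table, so this is an analogy, not how ChatGPT works.

```python
# Toy illustration of guessing what a made-up word like "hudemeflip"
# means from the words around it. The categories and clue words are
# invented for this example.
CONTEXT_CLUES = {
    "shop": {"buy", "bought", "till", "aisle", "groceries", "checkout"},
    "vehicle": {"drove", "parked", "wheel", "fuel", "engine"},
}

def guess_meaning(sentence: str) -> str:
    """Pick the category whose clue words overlap the sentence most."""
    words = set(sentence.lower().replace(",", " ").split())
    scores = {cat: len(words & clues) for cat, clues in CONTEXT_CLUES.items()}
    return max(scores, key=scores.get)

print(guess_meaning("I went to the hudemeflip to buy milk and groceries"))
# prints "shop"
```

The made-up word itself contributes nothing; only the company it keeps does, which is roughly why rephrasing the question can change how well the model "seems" to understand.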
 
I just don’t see how we can replicate or improve on something like the human brain when we barely understand it ourselves; it’s hard to observe in action. That said, AI is going to be (and already is) an incredible tool.
We'll just wait a year or so and you will see.
 
Gemini is powerful as fuck.


Humans are really really stupid.

[Attached: lets-play.gif]
 
I never thought we'd see HAL 9000 in my lifetime, and we are WAY beyond that already, in 2023.

God only knows what 2027, '28 and '29 will bring.

Putting aside the "Terminator" risk for one moment - not that it should be dismissed - I cannot understand why the mainstream media are not making more of the risks AI poses to jobs. Over the next decade or so we are going to see job losses on an unimaginable scale. It's probably THE biggest issue we are going to have to deal with, and hardly anyone is even mentioning it.
 
Cmon. You lot haven't even used Google Gemini. It's just a video! They've had a year, plus all the resources and all the minds, to come up with that video. Just wait and see what it's really like.

People expect progress to continue at the same rate as we perceived it to have done over the last two years.

In all likelihood, it won't. There is almost no chance.

The leap forward was leveraging big data, on the latest GPU farms, using a novel mathematical / computing technique: a new algorithm.

We can increase the data. We can use more GPU power. But the step forward was the algorithm. To repeat the last two or three years, we'd need another algorithmic improvement of the same significance. That is not likely at all. Transformers were a once-in-20-years thing - a wholly new way of approaching the fundamental algorithm.

What we need now is replication, more efficiency, more refinement. The will to tackle the large pile of shortcomings that have become evident. And for it all to be packaged into genuinely useful applications. That will happen, but it's a 2-5 year thing.

Look, GPT is constantly tweaked, and what we see is that they improve one area of performance but get worse results than before in other areas. That clearly shows transformers worked, and transformed performance. You can't invent it twice. So now we are back to the normal slog of technological development.
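For readers wondering what that "fundamental algorithm" actually is: the core of the transformer is scaled dot-product attention. The snippet below is a minimal NumPy sketch of just that core, with random inputs standing in for learned representations; a real transformer adds learned projections, multiple heads, positional information and stacked layers on top of this.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how strongly each query matches each key
    weights = softmax(scores)         # each row is a probability distribution
    return weights @ V                # weighted mix of the value vectors

# Random stand-ins: 4 tokens, each an 8-dimensional vector.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))

out = attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per token
```

Each output row is a context-dependent blend of all the value vectors, which is the mechanism that let these models relate every token to every other token in one step.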
 
I don't agree with anything above. (Apart from there being a bit of marketing spin about it not being real-time. And how long it takes to respond is frankly COMPLETELY irrelevant, since all that means is that when we have faster compute, it will respond faster. If the AI can do it at all, then the capability is there.)

But it sounds to me like you have your head in the sand somewhat (maybe you want to be in denial, because the future is pretty scary?).

The pace of development is not slowing. It's accelerating. We'll have Q* out in a few weeks, I imagine, and if it understands maths as people say it does, then god knows what it will be capable of. And then Tesla's next big thing, and then Anthropic's, and then Meta's Llama updated, and back to Google with something new.

Earlier this year I was saying we might get AGI in 3 to 5 years. Now I am certain we will have it within 3 years, and very, very probably next year IMO.
 
AI is still limited, though, by our knowledge vs 'its' knowledge. If you ask ChatGPT to unify quantum mechanics and general relativity, it won't be able to do it, and it won't any time soon. The reason is that that knowledge hasn't been invented by humans yet; we don't even know if those theories are correct, but that's where AI can help.

The abstract, in terms of AI, is still miles away, and for me it isn't going to come until we invent a superior intelligence in terms of biology. Is that really AI though? Or is it actually a whole new species? Computing power isn't going to change this much; doing something faster doesn't equate to doing something smarter. In theory AI shouldn't need computing power, because it would optimise itself - the human brain is a great example.

To say that AI understands math misses the point that humans invented math in the first place. So all AI is actually doing is parcelling up and applying what we already know. This is great if you want to solve complex problems quicker, but if you don't know the question, let alone the answer, then AI isn't going to help you.

For now AI will become a tool that we use to be more productive, but that puts it fairly on par with robotics in manufacturing. I wouldn't say it's really super advanced yet, but, like robotics in manufacturing, it's going to change a lot over the coming years.
 

Google has one of the biggest data sets; that is why I think Gemini will be so strong. Imagine all the marketing data they have from Google and YouTube. Scary.
 
The applications in a few years will be really cool. I use M365 Copilot, and while some things are useful, it still has a way to go.

I think these technologies will have positive and negative impacts (just as personal computers, web browsers, social media and mobile devices have had).
 

I think a more cynical take is that the big leap was actually that they proved this technology could have commercial value. AI language models used to be stuck in a sort of esoteric wilderness where people with future vision were investing in them despite the potential capability not yet being evident. With the transformer and the original GPT, they saw something that could actually be applied to real-world situations and businesses.

That commercial viability is when we see big leaps in tech. A similar example might be that we went from a prototype jet plane to a commercial airliner in about 4 years. Same underlying innovation, but radically better application because of other subsequent innovations driven by commercial gain.

That's why I don't think this is likely to slow down as much as you propose. Progress does get harder and so maybe the leaps do become more incremental. But more incremental could still be pretty crazy. I generally don't bet against human ingenuity and when there is enough money to put the world's greatest minds in a room, we get things like Concorde, the Manhattan Project and the Apollo missions.
 
AI is still limited, though, by our knowledge vs 'its' knowledge. If you ask ChatGPT to unify quantum mechanics and general relativity, it won't be able to do it, and it won't any time soon. The reason is that that knowledge hasn't been invented by humans yet; we don't even know if those theories are correct, but that's where AI can help.
Honestly, mate, that is just so mistaken. The only bit I agree with is the "any time soon" part, and that's only because you are describing one of the most difficult puzzles in all of science. But the idea that AI can only parrot back what humans have fed it is, I'm afraid, simply not true.

The abstract, in terms of AI, is still miles away, and for me it isn't going to come until we invent a superior intelligence in terms of biology. Is that really AI though? Or is it actually a whole new species? Computing power isn't going to change this much; doing something faster doesn't equate to doing something smarter. In theory AI shouldn't need computing power, because it would optimise itself - the human brain is a great example.

To say that AI understands math misses the point that humans invented math in the first place. So all AI is actually doing is parcelling up and applying what we already know. This is great if you want to solve complex problems quicker, but if you don't know the question, let alone the answer, then AI isn't going to help you.

For now AI will become a tool that we use to be more productive, but that puts it fairly on par with robotics in manufacturing. I wouldn't say it's really super advanced yet, but, like robotics in manufacturing, it's going to change a lot over the coming years.

Yes, it's going to change a lot in a very short space of time, but the idea that only biological entities can become super-intelligent is, well, let's say, not the mainstream view. I think you are seriously underestimating just how clever these systems are becoming, and it's moving VERY fast indeed.

The fact is we just don't know how these systems work or how they are doing what they can do, so it's impossible to say they aren't thinking or imagining or dreaming - we genuinely have no idea.
 