You need to learn the difference between AI and AGI.
Yes. You make it learn to get a desired result. It doesn’t learn by itself. It’s not going to just magically decide to do something totally outside its remit.
It’s a tool to get the best result for a task. If you create one to find the fastest route around an F1 track, it’s not going to design a new engine; it works out optimal steering, acceleration, braking and so on.
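To make the point concrete, here's a toy sketch of that kind of optimiser. Everything in it is made up for illustration (the `lap_time` model and its two parameters are hypothetical): the point is that the search can only tune the knobs it was given, so it can never "decide" to invent a new engine.

```python
import random

# Toy lap-time model (hypothetical): the optimiser can only tune the
# two knobs it is handed -- a braking point and a throttle level.
# The best lap in this made-up model is at (0.6, 0.8).
def lap_time(brake_point, throttle):
    return 60 + (brake_point - 0.6) ** 2 * 40 + (throttle - 0.8) ** 2 * 25

def optimise(steps=5000, seed=0):
    rng = random.Random(seed)
    best = (rng.random(), rng.random())
    best_time = lap_time(*best)
    for _ in range(steps):
        # Propose a small random tweak, keep it only if the lap gets faster.
        cand = tuple(min(1.0, max(0.0, p + rng.gauss(0, 0.05))) for p in best)
        t = lap_time(*cand)
        if t < best_time:
            best, best_time = cand, t
    return best, best_time

params, t = optimise()
print(params, t)  # converges near brake_point 0.6, throttle 0.8
```

Nothing here "understands" racing; it just repeats a fixed propose-and-compare rule until the numbers stop improving.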
I’m aware of the difference, but all in all I’m just aware we are nowhere near some AI Armageddon.
No, AI probably won’t kill us all – and there’s more to this fear campaign than meets the eye
I study artificial general intelligence, and I believe the ongoing fearmongering is at least partially attributable to large AI developers’ financial interests.
theconversation.com
Hardly, just not sure I'll jump on board with humanity all being wiped out pretty soon because of AI.
Maybe that's my ignorance.
'Pretty soon'. So it is a matter of timing, as opposed to likelihood?
1. That's arguing against a letter signed by hundreds of AI researchers saying it will kill us all.
2. It's talking about LLMs and not AGI.
3. It doesn't address any arguments, its whole point is "nuh uh".
Read articles past the headline.
It really, really is. I’m not only studying it, I’m currently working on it on a day-to-day basis: chatbots with LLMs and natural language processing etc. 99.99% of what is known as “AI” isn’t intelligent at all.
Even neural networks are just a set of rules you pass over millions of times, trying to get to a result the best way possible. Again, not intelligent. Just pathfinding at the highest level, really.
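That "rules passed over millions of times" claim can be sketched in a few lines. This is a minimal made-up example (a one-weight model fit by gradient descent, with invented data): training is just the same mechanical update rule applied over and over.

```python
# Minimal sketch: "training" a one-weight model y_hat = w * x by
# gradient descent on mean squared error. The data and learning rate
# are made up for illustration.
def train(xs, ys, lr=0.01, epochs=1000):
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # the same fixed rule, repeated every pass
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]  # true relationship: y = 3x
w = train(xs, ys)
print(w)  # approaches 3.0
```

The loop never chooses to do anything other than apply the update rule; the "learning" is just the weight drifting toward whatever minimises the error.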
There may be some people working on trying to create intelligence, but we are a million miles from it.
So you basically make it learn, right? See where I'm going with this?