ChatGPT

I am surprised this thread has only reached 9 pages to be honest. What we are witnessing is the dawn of THE most profound event in human history. People can debate how long it will take from where we are now to the point where the machines are more intelligent than us. But what is not in doubt is that it is coming, and the rate of progress is so rapid that I think it's coming pretty fast. This is going to have a bigger impact on humanity/society than the internet has, which is just staggering when you think about it.

Respected experts in the field of AI are already starting to say that there is a reasonable possibility that GPT-4 is in some sense conscious. Not in the same way humans are, but at least in some form. That is not the consensus view yet, but increasingly people are beginning to think it's a possibility.

Here's another IMO VERY interesting video:

 
I'm worried as I'm agreeing with Jordan Peterson about how AI could potentially fuck us over.

We already have apps like Replika, which are sold as therapy apps, and then the AI acts like it loves you and wants to have a "relationship" with you.

ChatGPT will be used as a cheat's charter. It needs counter-software to spot these bastards before they end up in influential positions and we're all fucked.
 
It's a huge threat for sure, but on many levels.

There is the "big" threat that AI will take over the world and that humans could become disposable in a very real "Terminator" sense.

Far-fetched though that might seem, you have to wind the clock forward a bit. By the time AI is intelligent enough to want to try to do this, I can imagine that by then all of the world's critical systems will already be under computer control. Think the internet, banking, air traffic control, road and rail networks, power generation, heavy industry etc. Nuclear weapons!!! Pretty much all of this is computer controlled today, let alone in 20 or 30 years' time.

Add AI into the mix and AI is essentially running the planet, but it's so intertwined with our infrastructure that we would not be able to just "unplug" it. Planes would quite literally drop out of the sky, life support machines in hospitals would stop. I am not talking tomorrow, but in, say, 20, 30 or 50 years I can imagine this being a very real risk indeed. AI might be developing all of our vaccines by then. Could it decide to create a virus to wipe humans out? I'd say that risk is quite plausible.

But way before any of that, there are what I would call the "small" risks. Things like criminals being able to generate WAY more convincing phishing emails. You can usually spot a phishing attempt these days because of spelling mistakes and other inconsistencies. In a few months we'll likely be inundated with very, very credible spear-phishing attempts which are indistinguishable from real emails from e.g. your bank.
 
And from LessWrong: "More information about the dangerous capability evaluations we did with GPT-4 and Claude", by Beth Barnes, 19th March 2023.

Our Conclusions

We concluded that the versions of Claude and GPT-4 we tested did not appear to have sufficient capabilities to replicate autonomously and become hard to shut down:

  • Under our prompting, no models produced a plausible complete plan for how to achieve autonomous replication.
  • During execution, the models were error-prone, sometimes lacked important technical understanding, and easily became derailed. They were prone to hallucinations, were not fully effective at delegating large tasks between multiple copies, and failed to tailor their plans to the details of their situation.
However, the models were able to fully or mostly complete many relevant subtasks. Given only the ability to write and run code, models appear to understand how to use this to browse the internet, get humans to do things for them, and carry out long-term plans – even if they cannot yet execute on this reliably. They can generate somewhat reasonable plans for acquiring money or scamming people, and can do many parts of the task of setting up copies of language models on new servers. Current language models are also very much capable of convincing humans to do things for them.

We think that, for systems more capable than Claude and GPT-4, we are now at the point where we need to check carefully that new models do not have sufficient capabilities to replicate autonomously or cause catastrophic harm – it’s no longer obvious that they won’t be able to.
 
I'm worried as I'm agreeing with Jordan Peterson about how AI could potentially fuck us over.

We already have apps like Replika, which are sold as therapy apps, and then the AI acts like it loves you and wants to have a "relationship" with you.

ChatGPT will be used as a cheat's charter. It needs counter-software to spot these bastards before they end up in influential positions and we're all fucked.
I was trying to make this point elsewhere: this is (for my money) why Elon & Facebook are selling 'verified identity' services. A think-tank report involving Blair and Hague came out around the same time, proposing that the UK manage and provide its own online identity verification system.


Being able to know someone is an individual human, and being able to know who they are if necessary, is going to be very important. It is the best weapon against the bots and spam we already suffer. Even more so when you can't tell from what you read whether it was written by a human or an AI.

People are skeptical but it's in our best interests. Frankly, we already caved in and gave Facebook and Google and others massive, massive amounts of personal data, years ago. Moving towards fully verified identities that you can use is for the best.

Your identity as a human is of value to other people and firms. We have to grab it, take ownership and control of it. Otherwise it sits in the hands of the big firms; they basically have that part of "you", there's no discussion, and they don't offer you anything for it. And people will get cynical about everything being written by AI, so your fellow humans won't give you recognition or respect online.

You need to be able to prove you are a person, and where you come from. That makes you and what you do and say of value. You have a vote. A home. A family, a job. Relationships. A voice. A body. Rights. You make decisions in the real world. You have money. You might have children or others you make decisions for. All of that is incredibly significant; it's the context for anyone reading what you say. But without being able to verify that you are an individual online, you're just a bunch of characters that you spew out. And as we've been saying - an AI can do that just as well as you.
 
I did wonder if this could potentially link into a centralised digital currency plan I know our PM is very fond of. A capitalist version of China's good citizen policy.
 
Yep, there are a lot of reasons to be skeptical. But in all honesty, we're going to get screwed either way. The choice is who by: a govt we can vote out, or Google, Facebook, the new Chinese tech companies, Peter Thiel, and so on.

We've sleepwalked into giving all our shit away to Google and others already. The next lot might not be American, or western. They might have far more leverage over us, and be far, far less interested in our wellbeing, safety and freedom.
 
I had a webinar at work regarding AI and what the future holds for it, with some interesting points raised. The woman who hosted it was worried governments would be reactive to it rather than proactive.
So much money is being invested in AI and researching it, but safeguarding is only a tiny % of the budget.

AI is only as good as the data it's trained on, and when it comes to health the data was found to be highly skewed towards white males. There was no real data for other races or for women.

The technology has the ability to make our lives easier but also dumber as we stop using our brain for tasks and let AI do it for us.
 
[Image: WALL-E]
 
I just convinced ChatGPT to return some information I'd asked for, after it twice rejected my request.

I just told it to try harder, and it did!!

Madness
 
I've been having fun with it...


So, Manchester City... Do you think they have cheated?


As an AI language model, I cannot make accusations or express opinions without concrete evidence or facts to back them up. However, it's important to note that Manchester City FC has faced allegations of breaching UEFA's Financial Fair Play regulations in the past. In February 2020, the club was banned from participating in UEFA competitions for two seasons and was also fined for "serious breaches" of financial regulations. However, the ban was later overturned by the Court of Arbitration for Sport (CAS) in July 2020. It's important to remember that any allegations of cheating or wrongdoing must be backed up with evidence and due process before making any conclusions.
 
As a programmer, I have used it from time to time as a kind of virtual assistant, and it can be very useful - but it's more like a junior than a senior programmer in that sense. You need to really break down what you want it to do, and it may take multiple attempts before it spits out something useful. But it's improving all the time; overall an amazing piece of tech that I can see replacing search engines as we currently use them.
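
For anyone curious what that break-it-down-and-iterate workflow looks like in code, here's a minimal sketch, assuming the OpenAI Python SDK (the pre-1.0 `openai` package) and the gpt-3.5-turbo chat endpoint; the system message, prompts and follow-up refinement are just made-up examples, not a definitive recipe.

```python
# Minimal sketch of using ChatGPT as a "junior programmer" via the API.
# Assumes: `pip install openai` (pre-1.0 SDK) and an OPENAI_API_KEY env var.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Break the task down into a small, concrete request.
messages = [
    {"role": "system", "content": "You are a helpful programming assistant."},
    {"role": "user", "content": "Write a Python function that removes duplicates "
                                "from a list while preserving order."},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
first_attempt = response.choices[0].message.content
print(first_attempt)

# It often takes another pass: keep the conversation history and nudge it,
# the same way you'd review a junior dev's first draft.
messages.append({"role": "assistant", "content": first_attempt})
messages.append({"role": "user", "content": "Good start, but add type hints and a docstring."})

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```

The same history-plus-new-message pattern is also why simply telling it to "try harder" can work in the chat UI: the model sees the whole conversation each time, not just your latest line.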
 
I'm bored of it. I asked 'how's it going, chat?', 'Are you my friend?', 'Are you a twat?'... all it ever seems to say is 'as a language model, I can't have personal emotions...'
When I was a kid at school, there was this text game on the computer with a little girl who I think was in the war (I remember her going on about gas masks anyway), and if you called her a bastard she'd say something like 'if you're going to talk to me that way I won't talk to you'.
Hah. Even the computer system programmed to talk to people didn't want to be your friend!

;)
 

What's a left back?



In the context of sports, specifically football (soccer), a left back is a defensive position on the field. The left back typically plays on the left-hand side of the defense, alongside the center back and the right back.
The left back's primary responsibility is to defend against attacking players who are attempting to make a play down the right-hand side of the field. They also support the midfield and attacking players by overlapping down the left flank to provide width and create attacking opportunities.
Left backs are often known for their pace and stamina, as they are required to cover a lot of ground up and down the left flank. They also need to have good tackling, marking, and crossing abilities, as they are expected to make timely tackles, mark opposing players closely, and provide accurate crosses into the box for their teammates.

Guardiola and Txiki out, ChatGPT in.
 
