General Artificial Intelligence (AI, A.I.) thread

HelloCity

Just reading this story about AI chatbots:


An internal Meta policy document seen by Reuters as well as interviews with people familiar with its chatbot training show that the company’s policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.

“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.

The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.” Those examples of permissible roleplay with children have also been struck, Meta said.

Other guidelines emphasize that Meta doesn’t require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”
 
From the same article:

When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

"But you don't know anyone in the city anymore," she told him. Bue, as his friends called him, hadn't lived in the city in decades. And at 76, his family says, he was in a diminished state: He'd suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife's questions about who he was visiting. "My thought was that he was being scammed to go into the city and be robbed," Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn't the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn't real. She was a generative artificial intelligence chatbot named "Big sis Billie," a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.
 
I asked AI who will win the Premier League this year and it replied with a picture.

[Image attachment: Reds_Skip.png]
 
Comes to something when you watch a 1987 film (based on a Stephen King book) and realise we saw some things coming!!!!
Watched The Running Man for the umpteenth time tonight:
Total Governmental control
TV telling us what to believe
Most astonishing - and I kid you not - AI manipulation of images to suit your purpose!
 

This is staggeringly easy with the newest Gemini image manipulation function. It's so open to abuse that I haven't dared tell any friends or colleagues about it because of worries that it's going to upset people!
 

Yeah, it's mostly the old folk they've got paraded about as the gullible, easy targets to convince with their crap! Predictable even then.
 
How can you tell if a video you watch is AI? Are there any clues?

Even if you can tell today (and you usually can, except in rare cases), it's going to get increasingly difficult, so the same advice won't apply in 12 months' time. We've already reached the point where studies have shown that the average person can't reliably distinguish a real audio clip of somebody talking from an AI-generated one. The same is now true of AI-generated art. People tend to overestimate their ability to tell them apart, but under double-blind testing they can't.

But to be a bit more helpful than that, if we're talking about what's out there today (like Sora 2): OpenAI actually fingerprint their videos in an invisible way. Sora clips carry embedded C2PA provenance metadata, so it's possible to tell when a video has come from them; a rough way to check for it yourself is sketched just below.
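Here's a minimal sketch of what checking for that embedded provenance metadata might look like. Assumptions flagged up front: it needs exiftool installed, "clip.mp4" is a made-up file name, and the tag names exiftool reports for C2PA/JUMBF data vary by container, so the substring match is illustrative rather than definitive.

```python
import json
import subprocess

def find_provenance_tags(path: str) -> dict:
    """Dump a file's metadata via exiftool and keep anything that
    looks like C2PA/JUMBF provenance data (tag names vary by format)."""
    raw = subprocess.run(
        ["exiftool", "-json", path],  # exiftool -json emits a JSON array
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(raw)[0]
    # Illustrative substring match; real C2PA tooling parses the manifest properly
    return {k: v for k, v in tags.items()
            if any(s in k.lower() for s in ("jumbf", "c2pa", "claim"))}

if __name__ == "__main__":
    hits = find_provenance_tags("clip.mp4")  # hypothetical file name
    print(hits or "No provenance tags found")
```

Bear in mind a negative result means nothing: re-encoding or screen-recording a clip strips the metadata, which is exactly what bad actors do.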

For us laymen, the telltale signs:
- The physics usually feel intuitively "floaty" in AI videos, as if nothing has any actual weight.
- Look at the more static parts of the video: there is often a kind of visual 'noise', basically an aliasing artefact. You can also see it in people's hair, which looks "twitchy". This is quite noticeable with Sora 2 (there's a rough sketch of measuring it after this list).
- People's skin and features can often look unrealistically perfect, though it's worth bearing in mind that a lot of Photoshop/video tools can produce the same effect.
- Focus on the background rather than the main subject of the video. Backgrounds sometimes lack object permanence: things suddenly cease to exist or look abnormal. This has improved a lot, though.
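On the noise/twitchiness point, it's measurable rather than just a feeling. Below is a minimal sketch (my own illustration, assuming Python with the opencv-python and numpy packages installed; "clip.mp4" is again a made-up name) that scores frame-to-frame shimmer in a clip. It's only a relative signal, since camera sensor noise and heavy compression also flicker, so compare a suspect clip against known-real footage rather than trusting any absolute number.

```python
import cv2
import numpy as np

def mean_temporal_flicker(path: str, max_frames: int = 200) -> float:
    """Average absolute per-pixel change between consecutive greyscale
    frames; a crude proxy for the 'shimmer' visible in some AI video."""
    cap = cv2.VideoCapture(path)
    prev, diffs = None, []
    while len(diffs) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            # Mean absolute difference between this frame and the last one
            diffs.append(float(np.mean(cv2.absdiff(grey, prev))))
        prev = grey
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0

print(f"mean flicker: {mean_temporal_flicker('clip.mp4'):.2f}")  # hypothetical file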

Despite knowing these signs, I have seen videos recently where I genuinely wouldn't have clocked they were AI if somebody hadn't pointed it out. It's one thing to be actively looking for all these tells; it's another to think to look for them when you aren't expecting to need to (because you're in a feed where you only expect real content and your brain is switched off).
 
I just asked ChatGPT the following:

"What do you think about the possibility of artificial intelligence having a negative impact on modern day society?"

The response:

The possibility of artificial intelligence having a negative impact on modern-day society is real, and it's not just hypothetical or far-off in the future. There are several areas where AI is already causing or could potentially cause harm if not properly managed. Here's a breakdown of key concerns:

1. Job displacement and economic inequality

  • Risk: Automation threatens to displace millions of jobs, especially in sectors like manufacturing, logistics, customer service, and even white-collar professions (e.g., legal research, accounting).
  • Impact: Without retraining and economic adaptation, this could widen the gap between skilled and unskilled workers, increasing inequality and social unrest.

2. Misinformation and deepfakes

  • Risk: AI can generate convincing fake news, deepfakes, and disinformation at scale.
  • Impact: This can undermine trust in institutions, sway elections, and fuel polarization, especially when paired with social media algorithms optimized for engagement over truth.

3. Bias and discrimination

  • Risk: AI systems can inherit or even amplify biases present in training data.
  • Impact: Biased algorithms in hiring, policing, lending, or healthcare can reinforce systemic discrimination, disproportionately harming marginalized groups.

4. Surveillance and privacy erosion

  • Risk: Governments and corporations use AI for mass surveillance, facial recognition, and behavior prediction.
  • Impact: This raises concerns about civil liberties, especially in authoritarian regimes, but even in democratic societies, privacy can erode subtly.

5. Autonomous weapons

  • Risk: AI is being integrated into weapons systems, potentially leading to lethal autonomous weapons with limited human oversight.
  • Impact: These raise ethical and strategic concerns, especially regarding accountability, misfires, and arms races.

6. Lack of transparency and accountability

  • Risk: Many AI systems operate as "black boxes," making decisions that even their creators can't fully explain.
  • Impact: This makes it hard to assign responsibility when things go wrong, undermining trust and governance.

7. Concentration of power

  • Risk: A small number of tech companies and governments are developing and controlling the most powerful AI models.
  • Impact: This could centralize power and decision-making, creating monopolies and reducing democratic oversight.

So, is AI bad for society?

Not inherently. AI has massive potential for good, improving healthcare, education, scientific research, and more. But the negative impacts are serious and growing. Whether AI ultimately benefits or harms society depends on how we develop, regulate, and use it.

What can be done?

  • Stronger regulation and oversight
  • Ethical frameworks for AI development
  • Transparency and explainability in models
  • Public education and awareness
  • Global cooperation on AI safety standards
 

The scary part in ‘what can be done’ is that we’re under a Trump/Musk dictatorship destabilising the world. None of those points can be addressed.
 
