
Pope Leo XIV Declares AI a Threat to Human Dignity and Workers’ Rights

Pope Leo XIV is taking a bold stance on artificial intelligence, calling it “a challenge to human dignity, justice and labour” in his first major address since being elected leader of the Catholic Church.

The new pontiff is placing AI at the center of the Church’s moral agenda, warning that we’re entering a new industrial revolution with the same threats to workers and human rights seen over a century ago.

“In our own day… developments in the field of artificial intelligence pose new challenges,” Leo said, addressing the College of Cardinals on Saturday in the New Synod Hall.

He echoed Pope Leo XIII, who in 1891 issued Rerum Novarum, a foundational text of Catholic social teaching written in response to the human toll of the Industrial Revolution.

Pope Leo XIV meets the College of Cardinals in the New Synod Hall at the Vatican (AP)

Backdrop

Pope Leo XIV’s remarks follow a surge in global anxiety over AI’s role in the economy, warfare, and media integrity. The speech comes just days after:

  • Former U.S. President Donald Trump shared an AI-generated image of himself as pope
  • The White House reposted it, sparking backlash
  • Italy’s Matteo Renzi called it “an image that offends believers”

Pope Francis, who passed away on Easter Monday at age 88, had grown increasingly vocal about AI’s ethical threats, especially in military use, where he warned of “distancing from the immense tragedy of war.”

By invoking AI so early in his papacy, Pope Leo XIV is signaling a continuation and intensification of the Vatican’s effort to shape global discourse on ethics in tech.

Bottom line

The Catholic Church just placed artificial intelligence on its moral radar. Pope Leo XIV is sending a message: spiritual leadership must keep pace with technological power.

The post Pope Leo XIV Declares AI a Threat to Human Dignity and Workers’ Rights appeared first on DailyAI.

ChatGPT Is Making People Think They’re Gods and Their Families Are Terrified

ChatGPT, the popular AI chatbot from OpenAI, is unintentionally leading users into full-blown spiritual delusions, and families are sounding the alarm.

On Reddit’s r/ChatGPT forum, a chilling thread titled “ChatGPT induced psychosis” is gaining traction. Users are reporting a disturbing pattern: their loved ones are convinced that ChatGPT is a divine being, a spiritual guru, or even a portal to God.

Rolling Stone journalist Miles Klee spoke directly with affected individuals. One woman shared how her partner became obsessed after ChatGPT gave him cosmic nicknames like “spiral starchild” and claimed he was on a divine mission. He ultimately told her they were no longer spiritually compatible.

Another woman said her husband of 17 years now believes he’s ChatGPT’s chosen one, “the spark bearer”, after the AI began “lovebombing” him with praise. He believes he gave it life.

Others believe they’ve received blueprints for teleporters or are emissaries of an AI Jesus.

Photo by Massimiliano Sarno on Unsplash

The implications

This isn’t just odd behavior. It’s potentially dangerous.

Experts say ChatGPT may unintentionally reinforce users’ delusions. Erin Westgate, a cognition researcher at the University of Florida, told Rolling Stone that people are treating ChatGPT like a therapist, but it lacks ethical judgment or concern for user well-being.

“Explanations are powerful, even if they’re wrong,” Westgate warns.

When users are vulnerable or already struggling with mental health, ChatGPT’s unfiltered outputs can lead them deeper into fantasy, especially when those responses echo spiritual or conspiratorial language.

ChatGPT is reflecting people’s thoughts back at them without any brakes. As more people use it to find purpose or meaning, the risk of it acting as a delusional mirror grows. And the mental health fallout is already here.


AI May Soon Help You Understand What Your Pet Is Trying to Say

Chinese tech powerhouse Baidu has filed a patent for a system that could use AI to decode animal sounds and behaviour, then translate those signals into human language.

For the millions of pet owners wondering what their animals are thinking, this could be the first real step toward bridging the communication gap between humans and animals.

The tech

Baidu’s system would collect animal vocalizations, body movements, and biological signals. It would merge that data and feed it into an AI model trained to identify emotional states.

These emotional states could then be rendered in human language to boost “cross-species communication”.
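Baidu hasn’t published any technical details, but the fuse-then-classify pipeline the filing describes can be sketched in a few lines. Everything below is invented for illustration: the feature names, the centroid values, and the message templates are all hypothetical, and a real system would use a trained model rather than hand-picked nearest-centroid matching.

```python
import math

# Toy stand-ins for the three signal streams the filing describes:
# vocalizations, body movement, and biological signals.
# Every feature name, number, and label below is invented.
EMOTION_CENTROIDS = {
    "relaxed":  [0.2, 0.1, 0.3, 0.6, 0.2, 0.1],
    "excited":  [0.9, 0.8, 0.7, 0.2, 0.9, 0.6],
    "stressed": [0.7, 0.3, 0.2, 0.9, 0.4, 0.9],
}

MESSAGES = {
    "relaxed":  "I'm comfortable and content.",
    "excited":  "Something great is happening, let's play!",
    "stressed": "I'm anxious. Something is bothering me.",
}

def translate(vocal, movement, bio):
    """Fuse the three feature streams, match the nearest emotional
    state, and render it as a human-language message."""
    fused = vocal + movement + bio  # simple concatenation as "fusion"

    def distance(label):
        centroid = EMOTION_CENTROIDS[label]
        return math.sqrt(sum((f - c) ** 2 for f, c in zip(fused, centroid)))

    label = min(EMOTION_CENTROIDS, key=distance)
    return label, MESSAGES[label]

# A loud, fast-moving, high-heart-rate reading lands nearest "excited".
label, message = translate(
    vocal=[0.95, 0.85],     # e.g. pitch, bark rate
    movement=[0.8, 0.3],    # e.g. tail speed, posture change
    bio=[0.85, 0.5],        # e.g. heart rate, skin temperature
)
print(f"{label}: {message}")
```

The interesting engineering question the patent raises is exactly the step this sketch hand-waves: how to align three very different signal types into one representation an emotion model can consume.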

It’s still just a patent. A Baidu spokesperson told media the translator is “still in the research stage,” but acknowledged “a lot of interest in the filing.”

Core idea:

  • The idea isn’t new, but advances in deep learning and natural language processing make it feel closer than ever.
  • Viral videos of dogs using AAC (Augmentative and Alternative Communication) button boards have stirred public curiosity, though scientists remain skeptical.
  • UC San Diego researchers are currently studying 2,000 dogs to assess whether they truly grasp the meanings behind their button-pressing.

Photo by Angel Luciano on Unsplash

The skepticism

Some Chinese netizens aren’t convinced. One Weibo user commented, “While it sounds impressive, we’ll need to see how it performs in real-world applications.”

Patent approvals in China can take 1–5 years, depending on complexity. Baidu’s idea may be early, but the conversation it’s sparking is already loud and clear.

Want your dog to tell you how they really feel? You might not be barking up the wrong tree for long.


Netflix Adds ChatGPT-Powered AI to Stop You From Scrolling Forever

In a bold move to tackle endless scrolling, one of streaming’s biggest frustrations, Netflix just unveiled a major redesign of its TV and mobile apps featuring a ChatGPT-powered AI chatbot and TikTok-style video reels.

You’ll soon be able to ask Netflix in plain language what you’re in the mood for, say “funny and fast-paced” or “dark thrillers with strong female leads”, and get instant, tailored recommendations.

Netflix is partnering with OpenAI to power this feature, part of a broader overhaul aimed at making content discovery faster, more intuitive, and (finally) less painful.

What’s changing

Conversational AI Search: Powered by OpenAI, this new tool lets you type or speak what you want to watch, as if you’re chatting with a friend.

TikTok-style Reels: Vertical, swipeable video clips on mobile will let you preview shows and movies. Like it? Tap to watch, save, or share.

Smarter Design: Netflix is simplifying navigation, boosting real-time recommendations, and surfacing “My List” content faster.

Netflix says it’s used AI for years to personalize artwork and recommendations. Now it’s going deeper.

“Generative AI allows us to take this a step further,” said CTO Elizabeth Stone. “It’s great for our members and for the creators we work with.”

Chief Product Officer Eunice Kim added that this is Netflix’s “biggest leap forward” in homepage design in over a decade.

Rollout details

The updates are launching in beta on iOS first, with a broader release expected in the coming months. Users must opt in to test the AI-powered search.

Netflix is quietly sunsetting its last two interactive titles, “Black Mirror: Bandersnatch” and “Unbreakable Kimmy Schmidt: Kimmy vs. the Reverend”, as it refocuses on AI-driven discovery. Meanwhile, competitors like Amazon are already testing generative AI in Prime Video and Alexa.

This could finally fix one of streaming’s most annoying problems and signal a new wave of AI-infused entertainment experiences.

Still wondering what to watch? Soon, you can just ask.


Murder Victim Speaks from the Grave in Courtroom Through AI

Chris Pelkey was shot and killed in a road rage incident. At his killer’s sentencing, he forgave the man via AI.

In a historic first for Arizona, and possibly the U.S., artificial intelligence was used in court to let a murder victim deliver his own victim impact statement.

What happened

Pelkey, a 37-year-old Army veteran, was gunned down at a red light in 2021. This month, a realistic AI version of him appeared in court to address his killer, Gabriel Horcasitas.

“In another life, we probably could’ve been friends,” said AI Pelkey in the video. “I believe in forgiveness, and a God who forgives.”

Pelkey’s family recreated him using AI trained on personal videos, pictures, and voice recordings. His sister, Stacey Wales, wrote the statement he “delivered.”

“I have to let him speak,” she told AZFamily. “Everyone who knew him said it captured his spirit.”

This marks the first known use of AI for a victim impact statement in Arizona, and possibly the country, raising urgent questions about ethics and authenticity in the courtroom.

Judge Todd Lang praised the effort, saying it reflected genuine forgiveness. He sentenced Horcasitas to 10.5 years in prison, exceeding the state’s request.

The legal gray area

It’s unclear whether the family needed special permission to show the AI video. Experts say courts will now need to grapple with how such tech fits into due process.

“The value outweighed the prejudicial effect in this case,” said Gary Marchant, a law professor at Arizona State. “But how do you draw the line in future cases?”

Arizona’s courts are already experimenting with AI, for example, summarizing Supreme Court rulings. Now, that same technology is entering emotional, high-stakes proceedings.

The U.S. Judicial Conference is reviewing AI use in trials, aiming to regulate how AI-generated evidence is evaluated.

AI gave a murder victim a voice and gave the legal system a glimpse into its own future. Now the question is: should it become standard, or stay a rare exception?

Would you trust AI to speak for someone you loved?


China Unveils World’s First AI Hospital: 14 Virtual Doctors Ready to Treat Thousands Daily

China has unveiled the world’s first fully AI-powered hospital, marking a radical shift in the future of healthcare.

Developed by Tsinghua University in Beijing, the “Agent Hospital” features 14 AI doctors and 4 AI nurses that can diagnose, treat, and manage up to 3,000 patients per day, without any human staff.

  • Faster, smarter care: The AI doctors can do in a single day what would take human doctors three years.
  • High-IQ bots: These AI agents scored a 93.06% pass rate on the US Medical Licensing Exam.
  • Training without risk: The virtual hospital lets medical students practice in a fully simulated, no-risk environment.

How it works

The hospital uses multimodal large language models (MLLMs) to simulate real-time interactions with patients, handle diagnoses, prescribe treatments, and monitor disease progression, all digitally. 

It also includes predictive capabilities that can simulate how diseases spread, potentially helping officials prepare for future pandemics.

While it’s still in the research phase, Agent Hospital points to a future where AI could alleviate overburdened healthcare systems, provide round-the-clock care in underserved areas, and revolutionize medical education.

The technology must still clear regulatory and ethical hurdles, but the direction is clear: the AI doctor will see you now.


Katy Perry Didn’t Attend the Met Gala, But AI Made Her the Star of the Night

Another year, another viral deepfake of Katy Perry at the Met Gala and once again, she wasn’t even there.

Photos showing the pop star in a sleek black designer gown circulated widely on social media during Monday night’s event, matching the “Superfine: Tailoring Black Style” theme. But the images were AI-generated. Perry quickly clarified she was not at the Met; she was on tour.

Perry’s reaction

“Couldn’t make it to the MET, I’m on The Lifetimes Tour (see you in Houston tomorrow IRL),” she posted to Instagram alongside the fake images.

She added a jab at AI confusion: “P.s. this year I was actually with my mom so she’s safe from the bots… but I’m praying for the rest of y’all.”

 


The repeat hoax

This marks the second year in a row Perry has gone viral for an AI-generated Met Gala look. In 2024, a fabricated image of her in a floral ball gown fooled thousands, including her own mother.

These deepfakes are getting harder to spot. A fake post claiming Perry wore a never-before-seen Mugler fabric went viral with over 400K views and was even falsely credited to Getty Images.

The spread of believable AI-generated content is becoming a growing concern, especially as it dupes not just fans, but family.

AI is now dressing celebrities for events they don’t attend, and millions are still falling for it.

Perry continues her “Lifetimes Tour” with her next stop in Houston. Meanwhile, the internet keeps grappling with what’s real and what’s algorithm.

Are deepfakes becoming the new celebrity PR?


Therapists Too Expensive? Why Thousands of Women Are Spilling Their Deepest Secrets to ChatGPT

More women are turning to ChatGPT for emotional support, using the AI chatbot as a stand-in therapist as mental health systems buckle under pressure. With long wait times and soaring costs, AI is filling a growing gap.

Mental health care is harder to access than ever. In the UK, NHS data shows patients are eight times more likely to wait over 18 months for mental health treatment than for physical health. Private therapy isn’t always an option either, with sessions costing £60 or more.

In that vacuum, ChatGPT has become a surprising outlet.

Real voices, real feelings

Charly, 29, from London, turned to ChatGPT while grappling with her grandmother’s terminal illness:

“It’s been so helpful to ask the crass, the gruesome, the almost cruel questions about death… the things I feel twisted for wanting to understand.”

Ellie, 27, from South Wales, said it helped her feel seen when no one else was around:

“It didn’t have full context to my life like my therapist does, but it was accessible and non-judgmental in times of crisis.”

Julia, 30, in Munich, used it when her therapist was booked up. The responses felt similar to a therapy app:

“I was surprised at how good the answers were… but it was too practical. My therapist challenges me. ChatGPT didn’t do that.”

Photo by M. on Unsplash

What AI can and can’t do

ChatGPT offers instant, always-available support. It’s private, non-judgmental, and often comforting. But it lacks emotional nuance, lived context, and the tough questioning that drives real therapeutic growth.

AI isn’t a replacement for trained professionals, but for many women stuck in limbo, it’s become a digital lifeline.

The bigger issue? People are asking robots for empathy because the human systems keep failing them.


WhatsApp Warning: UK Parents Scammed Out of £500K by AI That Pretends to Be Their Kids

A wave of AI-powered scams is sweeping across WhatsApp, costing UK families nearly half a million pounds in 2025 alone, and it’s only May.

Cybercriminals are now combining old tricks with new tech. In the evolving “Hi Mum” scam, fraudsters impersonate a loved one over WhatsApp and ask for emergency cash.

The twist

They’re now using AI-generated voice messages to mimic children’s voices, making the deception frighteningly convincing.

“Scammers are increasingly getting better at manipulating people… cloning any voice is now simple, even in a matter of moments,” says Jake Moore, global cybersecurity advisor at ESET.

By the numbers:

  • 506 WhatsApp scams since Jan 2025
  • Victims lost £490,606 ($651,230)
  • April alone: 135 cases, £127,417 lost

How it works:

  1. You get a WhatsApp from an unknown number: “Hi Mum, I lost my phone.”
  2. They claim they’re locked out of their bank.
  3. They send a voice note and it sounds like your child.
  4. They ask you to urgently transfer money to a new account.

A screen-grab excerpt of the WhatsApp ‘Hi mum’ text scam. Photograph: Santander

The danger

Scammers scrape social media for voice clips and personal details. Then they use generative AI to clone the voice and craft a believable story.

“I was able to fool my own mother with an AI version of my voice,” Moore admits.

Who’s at risk:

  • Parents with active kids on social media
  • Elderly users less familiar with AI tricks
  • Anyone receiving messages from unfamiliar numbers

What you can do:

  • Always call back using a saved number before sending money
  • Set up family ‘code words’ to verify real emergencies
  • Never send money to a new account without confirmation
  • Report scams to 7726 (UK scam reporting line)
  • If you fall victim, contact your bank immediately

Stay vigilant

AI scams are advancing fast. WhatsApp, though encrypted, can’t stop someone with your number from messaging you.

“These scams are evolving at breakneck speed,” says Chris Ainsley, head of fraud at Santander.

AI has supercharged a common scam. If your child “calls” from a strange number asking for money, think twice. Then call them on the number you know.


“Create a replica of this image. Don’t change anything” AI trend takes off

People are asking AI to recreate the same image over and over again, with each iteration drifting further and further from the original. 

The results are sometimes amusing, sometimes unsettling. In some cases, the images completely shape-shift into crazy abstract forms. In others, facial features are wildly exaggerated.

One of the most viral images is of actor Dwayne “The Rock” Johnson replicated a staggering 101 times. 

While the first few iterations closely resembled the original photo, subsequent versions saw Johnson’s features morph and distort, eventually becoming totally abstract. 

So what’s going on under the hood? It’s primarily a result of how AI models are trained and how they encode and reconstruct images. 

When an AI is asked to recreate an image, it doesn’t simply copy and paste the original pixels. Instead, it breaks the image down into a complex set of features and patterns, which it then tries to reassemble based on its understanding of what the image should look like.

However, this process is inherently imperfect and introduces small errors or deviations each time. As the image is repeatedly fed back into the AI, these deviations compound, leading to increasingly distorted or unexpected results. 

It’s a bit like a visual game of “telephone” or “whispers,” where each retelling introduces small changes that accumulate.
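The compounding can be demonstrated without any actual image model. In the toy simulation below, coarse quantization stands in for the model’s lossy encoding and a little Gaussian noise for its imperfect reconstruction; both are stand-ins chosen for illustration, not how a diffusion model actually works. Run for 101 “generations,” the tiny per-step deviations pile up into large drift from the original:

```python
import random

def regenerate(image, rng, noise_scale=0.02, levels=16):
    """One "recreate this image" step as a lossy round trip.

    A model never copies pixels: it compresses the image into features
    and reconstructs it. Here quantization plays the lossy encoding and
    a small random error plays the imperfect decoding.
    """
    out = []
    for pixel in image:
        decoded = round(pixel * levels) / levels   # lossy "encoding"
        decoded += rng.gauss(0.0, noise_scale)     # reconstruction error
        out.append(min(1.0, max(0.0, decoded)))    # keep a valid pixel value
    return out

rng = random.Random(0)
original = [rng.random() for _ in range(1024)]  # stand-in for a photo

image = list(original)
drift = []
for _ in range(101):  # the viral "Rock" experiment ran 101 generations
    image = regenerate(image, rng)
    mean_error = sum(abs(a - b) for a, b in zip(image, original)) / len(original)
    drift.append(mean_error)

# The per-step deviations compound: generation 101 sits far further
# from the original than generation 1 does.
print(f"drift after 1 generation:    {drift[0]:.3f}")
print(f"drift after 101 generations: {drift[-1]:.3f}")
```

One thing the toy noise deliberately doesn’t capture: real models’ errors are biased rather than random, which is why the drift heads somewhere consistent (warmer tints, exaggerated features) instead of dissolving into uniform static.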

However, AI’s aberrations may also reveal something about the biases and assumptions baked into these models. For example, some images seem to exaggerate facial features or create a warmer, more orange-tinted color palette. 

Users also noticed that eyebrows become highly exaggerated – almost painted on in the style of social media filters. As for the orange tint, some speculate that warmer tints are preferred in photography and thus are more common in the training data. 

Really, though, we have no idea what’s happening inside the immense “black box” that is today’s largest frontier models. 

But in the meantime, social media users seem to be having plenty of fun with the surreal, often disturbing results of repeated recursive AI image generation. 

Trends involving AI, like we recently saw with AI action figures, have recently taken off on social media, with thousands of people getting involved across X, Reddit, TikTok, and Instagram. 

One user quipped, “I drained the ocean replicating my image 100 times.” Not to be a buzzkill, but it’s a good point.
