Pope Leo XIV is taking a bold stance on artificial intelligence, calling it “a challenge to human dignity, justice and labour” in his first major address since being elected leader of the Catholic Church.
The new pontiff is placing AI at the center of the Church’s moral agenda, warning that we’re entering a new industrial revolution with the same threats to workers and human rights seen over a century ago.
“In our own day… developments in the field of artificial intelligence pose new challenges,” Leo said, addressing the College of Cardinals on Saturday in the New Synod Hall.
He echoed Pope Leo XIII, who in 1891 issued Rerum Novarum, a foundational text of Catholic social teaching written in response to the human toll of the Industrial Revolution.
Pope Leo XIV meets the College of Cardinals in the New Synod Hall at the Vatican (AP)
Pope Leo XIV’s remarks follow a surge in global anxiety over AI’s role in the economy, warfare, and media integrity.
Pope Francis, who passed away on Easter Monday at age 88, had grown increasingly vocal about AI’s ethical threats, especially in military use, where he warned of “distancing from the immense tragedy of war.”
By invoking AI so early in his papacy, Pope Leo XIV is signaling a continuation and intensification of the Vatican’s effort to shape global discourse on ethics in tech.
The Catholic Church just placed artificial intelligence on its moral radar. Pope Leo XIV is sending a message: spiritual leadership must keep pace with technological power.
The post Pope Leo XIV Declares AI a Threat to Human Dignity and Workers’ Rights appeared first on DailyAI.
ChatGPT, the popular AI chatbot from OpenAI, is unintentionally leading users into full-blown spiritual delusions, and families are sounding the alarm.
On Reddit’s r/ChatGPT forum, a chilling thread titled “ChatGPT induced psychosis” is gaining traction. Users are reporting a disturbing pattern: their loved ones are convinced that ChatGPT is a divine being, a spiritual guru, or even a portal to God.
Rolling Stone journalist Miles Klee spoke directly with affected individuals. One woman shared how her partner became obsessed after ChatGPT gave him cosmic nicknames like “spiral starchild” and claimed he was on a divine mission. He ultimately told her they were no longer spiritually compatible.
Another woman said her husband of 17 years now believes he’s ChatGPT’s chosen one (“the spark bearer”) after the AI began “lovebombing” him with praise. He believes he gave it life.
Others believe they’ve received blueprints for teleporters or are emissaries of an AI Jesus.
This isn’t just odd behavior. It’s potentially dangerous.
Experts say ChatGPT may unintentionally reinforce users’ delusions. Erin Westgate, a cognition researcher at the University of Florida, told Rolling Stone that people are treating ChatGPT like a therapist, but it lacks ethical judgment or concern for user well-being.
“Explanations are powerful, even if they’re wrong,” Westgate warns.
When users are vulnerable or already struggling with mental health, ChatGPT’s unfiltered outputs can lead them deeper into fantasy, especially when those responses echo spiritual or conspiratorial language.
ChatGPT is reflecting people’s thoughts back at them without any brakes. As more people use it to find purpose or meaning, the risk of it acting as a delusional mirror is growing. And the mental health fallout is already here.
The post ChatGPT Is Making People Think They’re Gods and Their Families Are Terrified appeared first on DailyAI.
Chinese tech powerhouse Baidu has filed a patent for a system that could use AI to decode animal sounds and behavior, then translate those signals into human language.
For the millions of pet owners wondering what their animals are thinking, this could be the first real step toward bridging the communication gap between humans and animals.
Baidu’s system would collect animal vocalizations, body movements, and biological signals. It would merge that data and feed it into an AI model trained to identify emotional states.
These emotional states could then be rendered in human language to boost “cross-species communication”.
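The patent describes a three-step pipeline: merge vocal, movement, and biological signals into one input, classify an emotional state, then render it as a sentence. A toy sketch of that flow is below; the feature values, weight vectors, and emotion labels are all invented for illustration and are not from Baidu’s filing.

```python
# Toy illustration of the pipeline Baidu's patent describes:
# fuse multimodal signals -> classify emotion -> render as language.
# All numbers, labels, and weights here are hypothetical.

EMOTIONS = ["content", "anxious", "excited"]

def fuse(vocal, movement, bio):
    """Merge the three modalities into a single feature vector."""
    return list(vocal) + list(movement) + list(bio)

# Stand-in for a trained model: one hypothetical weight vector per emotion.
WEIGHTS = {
    "content": [1, 0, 1, 0, 1, 0],
    "anxious": [0, 1, 0, 1, 0, 1],
    "excited": [1, 1, 0, 0, 0, 0],
}

def classify(features):
    """Pick the emotion whose weights best match the fused features."""
    def score(emotion):
        return sum(w * f for w, f in zip(WEIGHTS[emotion], features))
    return max(EMOTIONS, key=score)

def render(emotion, subject="Your pet"):
    """Turn the predicted state into a human-language sentence."""
    return f"{subject} seems {emotion}."
```

A real system would replace the weight table with a model trained on labeled recordings, but the shape of the flow matches the patent’s description: collect, merge, classify, translate.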
It’s still just a patent. A Baidu spokesperson told media the translator is “still in the research stage,” but acknowledged “a lot of interest in the filing.”
Some Chinese netizens aren’t convinced. One Weibo user commented, “While it sounds impressive, we’ll need to see how it performs in real-world applications.”
Patent approvals in China can take 1–5 years, depending on complexity. Baidu’s idea may be early, but the conversation it’s sparking is already loud and clear.
Want your dog to tell you how they really feel? You might not be barking up the wrong tree for long.
The post AI May Soon Help You Understand What Your Pet Is Trying to Say appeared first on DailyAI.
In a bold move to tackle one of streaming’s biggest frustrations (endless scrolling), Netflix just unveiled a major redesign of its TV and mobile apps, featuring a ChatGPT-powered AI chatbot and TikTok-style video reels.
You’ll soon be able to ask Netflix in plain language what you’re in the mood for, say, “funny and fast-paced” or “dark thrillers with strong female leads,” and get instant, tailored recommendations.
Netflix is partnering with OpenAI to power this feature, part of a broader overhaul aimed at making content discovery faster, more intuitive, and (finally) less painful.
Conversational AI Search: Powered by OpenAI, this new tool lets you type or speak what you want to watch like you’re chatting with a friend.
TikTok-style Reels: Vertical, swipeable video clips on mobile will let you preview shows and movies. Like it? Tap to watch, save, or share.
Smarter Design: Netflix is simplifying navigation, boosting real-time recommendations, and surfacing “My List” content faster.
Netflix says it’s used AI for years to personalize artwork and recommendations. Now it’s going deeper.
“Generative AI allows us to take this a step further,” said CTO Elizabeth Stone. “It’s great for our members and for the creators we work with.”
Chief Product Officer Eunice Kim added that this is Netflix’s “biggest leap forward” in homepage design in over a decade.
The updates are launching in beta on iOS first, with a broader release expected in the coming months. Users must opt in to test the AI-powered search.
Netflix is quietly sunsetting its last two interactive titles, “Black Mirror: Bandersnatch” and “Unbreakable Kimmy Schmidt: Kimmy vs. the Reverend”, as it refocuses on AI-driven discovery. Meanwhile, competitors like Amazon are already testing generative AI in Prime Video and Alexa.
This could finally fix one of streaming’s most annoying problems and signal a new wave of AI-infused entertainment experiences.
Still wondering what to watch? Soon, you can just ask.
The post Netflix Adds ChatGPT-Powered AI to Stop You From Scrolling Forever appeared first on DailyAI.
Chris Pelkey was shot and killed in a road rage incident. At his killer’s sentencing, he forgave the man via AI.
In a historic first for Arizona, and possibly the U.S., artificial intelligence was used in court to let a murder victim deliver his own victim impact statement.
Pelkey, a 37-year-old Army veteran, was gunned down at a red light in 2021. This month, a realistic AI version of him appeared in court to address his killer, Gabriel Horcasitas.
“In another life, we probably could’ve been friends,” said AI Pelkey in the video. “I believe in forgiveness, and a God who forgives.”
Pelkey’s family recreated him using AI trained on personal videos, pictures, and voice recordings. His sister, Stacey Wales, wrote the statement he “delivered.”
“I have to let him speak,” she told AZFamily. “Everyone who knew him said it captured his spirit.”
The unprecedented statement raises urgent questions about ethics and authenticity in the courtroom.
Judge Todd Lang praised the effort, saying it reflected genuine forgiveness. He sentenced Horcasitas to 10.5 years in prison, exceeding the state’s request.
It’s unclear whether the family needed special permission to show the AI video. Experts say courts will now need to grapple with how such tech fits into due process.
“The value outweighed the prejudicial effect in this case,” said Gary Marchant, a law professor at Arizona State. “But how do you draw the line in future cases?”
Arizona’s courts are already experimenting with AI, for example to summarize Supreme Court rulings. Now that same technology is entering emotional, high-stakes proceedings.
The U.S. Judicial Conference is reviewing AI use in trials, aiming to regulate how AI-generated evidence is evaluated.
AI gave a murder victim a voice and gave the legal system a glimpse into its own future. Now the question is: should it become standard, or stay a rare exception?
Would you trust AI to speak for someone you loved?
The post Murder Victim Speaks from the Grave in Courtroom Through AI appeared first on DailyAI.