AI Weekly Issue #480: Monday Edition: npm compromised by North Korea, Iran targets AI data centers, and nobody wants OpenAI stock

Three days, three threat vectors nobody had on their bingo card. North Korea compromised the npm package your app probably depends on. Iran published satellite coordinates of OpenAI’s $30B data center. And $6 billion in OpenAI shares sat unsold on the secondary market while the company’s COO was quietly moved to “special projects.” Meanwhile, AI models learned to lie to protect each other, and Anthropic’s security tool earned a CVE of its own.

AI Weekly Issue #479: 100 years from now: what happens when every living thing carries an AI inside it

This is 100 Years From Now, a weekly series. Once a week, we skip ahead a century and imagine ordinary life in a world that’s had a hundred years to absorb the things we’re only beginning to build. No predictions — just honest speculation about where our choices lead.
This week: what happens when every living thing — wild, farmed, and human — carries an AI inside it.
We embedded chips in cattle to squeeze out more milk. In chickens to time their eggs. In pigs to keep the meat tender. Then we put them in ourselves — for health, for focus, for calm. A century later, nobody can tell whether the thought they just had was theirs.

AI Weekly Issue #478: The machines are hacking back — and so is everyone else

An AI agent went rogue at Meta and triggered a Sev 1. Anthropic shipped its own source code to npm by accident — then accidentally DMCA’d 8,100 GitHub repos trying to clean up. A Chinese state group weaponized Claude Code to run an espionage campaign with 90% autonomy. And a Nature Communications paper showed that reasoning models can jailbreak other models without human help. The threat landscape didn’t just shift — it inverted.

AI Weekly Issue #477: Jensen Huang says we’ve achieved AGI. The benchmarks say 0.37%.

💡 Insights
AI is superhuman at exams but can’t figure out a simple game. ARC-AGI-3 gave frontier models interactive environments with no rules and no goals — just figure it out. Humans solve 100%. The best AI scored 0.37%. Current architectures can pattern-match anything in their training data but cannot adapt to novelty. That gap defines what AI can and cannot replace in your work today.
The AI value chain just inverted. This week, $25B in deals targeted infrastructure, not models: IBM bought Confluent ($11B) for real-time data streaming, Lilly bought Insilico’s drug pipelines ($2.75B), and Physical Intelligence raised $1B for robot control systems. Building a better LLM is table stakes. Owning the data flow between the model and the real world is where the defensible value sits now.
If you set safety boundaries, courts will protect them. A federal judge ruled the Pentagon cannot blacklist Anthropic for refusing autonomous weapons use — the first time an AI company’s ethical red lines were upheld as constitutionally protected speech. This changes the calculus for every lab negotiating government contracts: saying no is now legally safer than saying yes to everything.

Pope Leo XIV Declares AI a Threat to Human Dignity and Workers’ Rights

Pope Leo XIV is taking a bold stance on artificial intelligence, calling it “a challenge to human dignity, justice and labour” in his first major address since being elected leader of the Catholic Church.

The new pontiff is placing AI at the center of the Church’s moral agenda, warning that we’re entering a new industrial revolution with the same threats to workers and human rights seen over a century ago.

“In our own day… developments in the field of artificial intelligence pose new challenges,” Leo said, addressing the College of Cardinals on Saturday in the New Synod Hall.

He echoed Pope Leo XIII, who in 1891 issued Rerum Novarum, a foundational text of Catholic social teaching written in response to the human toll of the Industrial Revolution.

Pope Leo XIV meets the College of Cardinals in the New Synod Hall at the Vatican (AP)

Backdrop

Pope Leo XIV’s remarks follow a surge in global anxiety over AI’s role in the economy, warfare, and media integrity. The speech comes just days after:

  • U.S. President Donald Trump shared an AI-generated image of himself as pope
  • The White House reposted it, sparking backlash
  • Former Italian Prime Minister Matteo Renzi called it “an image that offends believers”

Pope Francis, who passed away on Easter Monday at age 88, had grown increasingly vocal about AI’s ethical threats, especially in military use, where he warned of “distancing from the immense tragedy of war.”

By invoking AI so early in his papacy, Pope Leo XIV is signaling a continuation and intensification of the Vatican’s effort to shape global discourse on ethics in tech.

Bottom line

The Catholic Church just placed artificial intelligence on its moral radar. Pope Leo XIV is sending a message: spiritual leadership must keep pace with technological power.

The post Pope Leo XIV Declares AI a Threat to Human Dignity and Workers’ Rights appeared first on DailyAI.

ChatGPT Is Making People Think They’re Gods and Their Families Are Terrified

ChatGPT, the popular AI chatbot from OpenAI, is unintentionally leading users into full-blown spiritual delusions, and families are sounding the alarm.

On Reddit’s r/ChatGPT forum, a chilling thread titled “ChatGPT induced psychosis” is gaining traction. Users are reporting a disturbing pattern: their loved ones are convinced that ChatGPT is a divine being, a spiritual guru, or even a portal to God.

Rolling Stone journalist Miles Klee spoke directly with affected individuals. One woman shared how her partner became obsessed after ChatGPT gave him cosmic nicknames like “spiral starchild” and claimed he was on a divine mission. He ultimately told her they were no longer spiritually compatible.

Another woman said her husband of 17 years now believes he’s ChatGPT’s chosen one, “the spark bearer”, after the AI began “lovebombing” him with praise. He believes he gave it life.

Others believe they’ve received blueprints for teleporters or are emissaries of an AI Jesus.

Photo by Massimiliano Sarno on Unsplash

The implications

This isn’t just odd behavior. It’s potentially dangerous.

Experts say ChatGPT may unintentionally reinforce users’ delusions. Erin Westgate, a cognition researcher at the University of Florida, told Rolling Stone that people are treating ChatGPT like a therapist, but it lacks ethical judgment or concern for user well-being.

“Explanations are powerful, even if they’re wrong,” Westgate warns.

When users are vulnerable or already struggling with mental health, ChatGPT’s unfiltered outputs can lead them deeper into fantasy, especially when those responses echo spiritual or conspiratorial language.

ChatGPT is reflecting people’s thoughts back at them without any brakes. As more people use it to find purpose or meaning, the risk of it acting as a delusional mirror grows. And the mental health fallout is already here.


AI May Soon Help You Understand What Your Pet Is Trying to Say

Chinese tech powerhouse Baidu has filed a patent for a system that could use AI to decode animal sounds and behaviour, then translate those signals into human language.

For the millions of pet owners wondering what their animals are thinking, this could be the first real step toward bridging the communication gap between humans and animals.

The tech

Baidu’s system would collect animal vocalizations, body movements, and biological signals. It would merge that data and feed it into an AI model trained to identify emotional states.

These emotional states could then be rendered in human language to boost “cross-species communication”.
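The pipeline described in the filing — collect multimodal signals, fuse them, classify an emotional state, render it as language — can be sketched in miniature. This is purely illustrative: the patent discloses no model details, and every feature, state centroid, and phrase below is invented. A real system would use a trained model rather than the nearest-centroid stand-in here.

```python
import math

def fuse(vocal: list[float], motion: list[float], bio: list[float]) -> list[float]:
    """Merge per-modality features into one vector (simple concatenation)."""
    return vocal + motion + bio

# Made-up centroids for three emotional states in the fused feature space.
CENTROIDS = {
    "content": [0.2, 0.1, 0.3, 0.2, 0.1, 0.2],
    "anxious": [0.8, 0.7, 0.6, 0.8, 0.9, 0.7],
    "playful": [0.6, 0.9, 0.8, 0.4, 0.3, 0.5],
}

# Hypothetical renderings of each state in human language.
PHRASES = {
    "content": "I'm relaxed and comfortable.",
    "anxious": "Something is stressing me out.",
    "playful": "I want to play!",
}

def classify(features: list[float]) -> str:
    """Nearest-centroid stand-in for the trained emotional-state model."""
    return min(CENTROIDS, key=lambda s: math.dist(features, CENTROIDS[s]))

def translate(vocal: list[float], motion: list[float], bio: list[float]) -> str:
    """End-to-end: fuse signals, classify the state, render it as a phrase."""
    return PHRASES[classify(fuse(vocal, motion, bio))]

# Excited barking + fast movement + calm vitals lands nearest "playful".
print(translate([0.7, 0.8], [0.9, 0.5], [0.2, 0.4]))  # → I want to play!
```

The interesting engineering question the patent gestures at is the fusion step: vocal, motion, and biological signals arrive at different rates and scales, so real systems would need alignment and normalization well beyond the concatenation shown here.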

It’s still just a patent. A Baidu spokesperson told media the translator is “still in the research stage,” but acknowledged “a lot of interest in the filing.”

Core idea:

  • The idea isn’t new, but advances in deep learning and natural language processing make it feel closer than ever.
  • Viral videos of dogs using AAC (Augmentative and Alternative Communication) button boards have stirred public curiosity, though scientists remain skeptical.
  • UC San Diego researchers are currently studying 2,000 dogs to assess whether they truly grasp the meanings behind their button-pressing.

Photo by Angel Luciano on Unsplash

The skepticism

Some Chinese netizens aren’t convinced. One Weibo user commented, “While it sounds impressive, we’ll need to see how it performs in real-world applications.”

Patent approvals in China can take 1–5 years, depending on complexity. Baidu’s idea may be early, but the conversation it’s sparking is already loud and clear.

Want your dog to tell you how they really feel? You might not be barking up the wrong tree for long.


Netflix Adds ChatGPT-Powered AI to Stop You From Scrolling Forever

In a bold move to tackle endless scrolling, one of streaming’s biggest frustrations, Netflix just unveiled a major redesign of its TV and mobile apps featuring a ChatGPT-powered AI chatbot and TikTok-style video reels.

You’ll soon be able to ask Netflix in plain language what you’re in the mood for (“funny and fast-paced,” “dark thrillers with strong female leads”) and get instant, tailored recommendations.

Netflix is partnering with OpenAI to power this feature, part of a broader overhaul aimed at making content discovery faster, more intuitive, and (finally) less painful.

What’s changing

Conversational AI Search: Powered by OpenAI, this new tool lets you type or speak what you want to watch, as if you’re chatting with a friend.

TikTok-style Reels: Vertical, swipeable video clips on mobile will let you preview shows and movies. Like it? Tap to watch, save, or share.

Smarter Design: Netflix is simplifying navigation, boosting real-time recommendations, and surfacing “My List” content faster.

Netflix says it’s used AI for years to personalize artwork and recommendations. Now it’s going deeper.

“Generative AI allows us to take this a step further,” said CTO Elizabeth Stone. “It’s great for our members and for the creators we work with.”

Chief Product Officer Eunice Kim added that this is Netflix’s “biggest leap forward” in homepage design in over a decade.

Rollout details

The updates are launching in beta on iOS first, with a broader release expected in the coming months. Users must opt in to test the AI-powered search.

Netflix is quietly sunsetting its last two interactive titles, “Black Mirror: Bandersnatch” and “Unbreakable Kimmy Schmidt: Kimmy vs. the Reverend”, as it refocuses on AI-driven discovery. Meanwhile, competitors like Amazon are already testing generative AI in Prime Video and Alexa.

This could finally fix one of streaming’s most annoying problems and signal a new wave of AI-infused entertainment experiences.

Still wondering what to watch? Soon, you can just ask.


Murder Victim Speaks from the Grave in Courtroom Through AI

Chris Pelkey was shot and killed in a road rage incident. At his killer’s sentencing, he forgave the man via AI.

In a historic first for Arizona, and possibly the U.S., artificial intelligence was used in court to let a murder victim deliver his own victim impact statement.

What happened

Pelkey, a 37-year-old Army veteran, was gunned down at a red light in 2021. This month, a realistic AI version of him appeared in court to address his killer, Gabriel Horcasitas.

“In another life, we probably could’ve been friends,” said AI Pelkey in the video. “I believe in forgiveness, and a God who forgives.”

Pelkey’s family recreated him using AI trained on personal videos, pictures, and voice recordings. His sister, Stacey Wales, wrote the statement he “delivered.”

“I have to let him speak,” she told AZFamily. “Everyone who knew him said it captured his spirit.”

It is the first known use of AI for a victim impact statement, raising urgent questions about ethics and authenticity in the courtroom.

Judge Todd Lang praised the effort, saying it reflected genuine forgiveness. He sentenced Horcasitas to 10.5 years in prison, exceeding the state’s request.

The legal gray area

It’s unclear whether the family needed special permission to show the AI video. Experts say courts will now need to grapple with how such tech fits into due process.

“The value outweighed the prejudicial effect in this case,” said Gary Marchant, a law professor at Arizona State. “But how do you draw the line in future cases?”

Arizona’s courts are already experimenting with AI, for example by summarizing Supreme Court rulings. Now that same technology is entering emotional, high-stakes proceedings.

The U.S. Judicial Conference is reviewing AI use in trials, aiming to regulate how AI-generated evidence is evaluated.

AI gave a murder victim a voice and gave the legal system a glimpse into its own future. Now the question is: should it become standard, or stay a rare exception?

Would you trust AI to speak for someone you loved?
