Hello Robot Unveils Open-source Robot Platform Called Stretch 4

Insider Brief

  • Hello Robot unveiled Stretch 4, a new open-source mobile manipulation robot platform designed for researchers and developers building physical AI systems for home and human-centered robotics applications.
  • According to the company, the $29,950 robot features omnidirectional mobility, a telescoping arm, multiple 3D lidar sensors, Nvidia Jetson Orin computing hardware and autonomous self-charging capabilities aimed at safe operation alongside people in homes and indoor environments.
  • Hello Robot said the platform was developed around practical home-assistance use cases, including support for people with severe mobility impairments, while emphasizing lightweight design and safety over traditional humanoid robot architectures.

PRESS RELEASE — Hello Robot, the team behind the Stretch mobile manipulation platform, today announced the release of Stretch 4. Available now for $29,950, Stretch 4 is an open-source robotics platform designed for researchers, developers, and application engineers building the next wave of Physical AI applications for general-purpose robotics.

“In the last few years, Physical AI has brought us videos of robots doing amazing stunts. But these videos are missing something critical — the people the robots are meant to benefit,” says Aaron Edsinger, co-founder and CEO of Hello Robot. “With Stretch 4, Hello Robot is delivering a robot designed specifically to operate safely shoulder-to-shoulder with people. It is compact, lightweight, ready to work, and a delight to use.”

A Pathway to Robots in Everyday Life

From the beginning, Hello Robot has worked toward a future in which robots enhance life for everyone, including children, older adults, and people with disabilities. To this end, Hello Robot has been piloting Stretch to support individuals with severe mobility impairments. Controlling the robot through a mobile phone app, users have increased agency, accomplishing tasks such as fetching a drink of water, closing the blinds, and feeding themselves.

In developing Stretch 4 to meet the needs of these users, Hello Robot is also tackling some of the most difficult challenges in robotics: safe, reliable, and intuitive human-robot interaction. The company's practical approach — rejecting the complexity of humanoid forms in favor of functional, lightweight, and safe design — positions Stretch as a pathway to robots becoming a welcome part of everyday life.

“Designing Stretch 4 to meet real needs in homes has resulted in a platform with unprecedented potential,” according to Charlie Kemp, CTO and co-founder of Hello Robot. “With Stretch 4, developers can confidently target new applications in which robots closely collaborate with people. People are not an afterthought; they are the primary reason for Stretch 4’s design.”

Stretch 4 from Hello Robot is an open-source mobile manipulator designed for researchers, developers, and application engineers. Its distinctive features include a telescoping arm, an omnidirectional base, and a sophisticated sensor array for safe autonomous operation. Users have designed wide-ranging applications to have Stretch closely collaborate with people. (Credit: Hello Robot)

Working Safely, Shoulder to Shoulder

Stretch 4 is a distinctive robot featuring a telescoping arm, an omnidirectional base, and a sophisticated sensor array — including two hemispherical 3D LiDAR sensors, three high-resolution cameras, and six laser line sensors. The architecture follows the “sensor-rich” philosophy utilized by Waymo to achieve high-fidelity safety in autonomous driving, standing in contrast to the more minimalist, vision-only approaches seen elsewhere in the industry.

“You can’t cheat physics when it comes to robot safety,” added Edsinger. “The inherent physical properties of full-size humanoids mean they can become potentially dangerous in the event of a system failure. In contrast, Stretch 4 has a unique, compact, and low-potential-energy design. It is intrinsically much safer when things don’t go as expected — as they invariably will.”

Stretch 4 is a major redesign based on customer feedback, resulting in a versatile and easy-to-use platform for developers. Notable features include:

  • Wide-angle 3D Sensing Head
    • The sensor head is fully calibrated and rigidly fixed with respect to the arm and mobile base, simplifying autonomy
    • Two hemispherical 3D LiDAR sensors and global-shutter fisheye RGB cameras observe the surroundings, dramatically reducing blind spots even when the arm is in use
    • One central, high-resolution RGB camera observes the gripper’s workspace, supporting dexterous manipulation
  • Omnidirectional Mobile Base
    • The mobile base provides quick and smooth motion in any direction
    • The large 20 cm wheels let the robot traverse indoor terrain, including carpets, rugs, and thresholds
    • Six laser-line sensors ring the base, sensing small hazards on the floor such as cords, rugs, and drop-offs
  • Greater Speed, Reach and End-of-arm Options
    • The arm, lift, and base operate at twice the speed of Stretch 3, while the total reach has been extended by 10%
    • The robot features 8 redundant degrees of freedom plus the gripper, including an ambidextrous wrist with an integrated depth camera that can be configured for either left- or right-handed operation
    • A new quick-release mechanism allows users to efficiently swap between a compliant gripper, parallel jaw gripper, and a tablet interface
  • New Power and Compute System
    • An NVIDIA Jetson Orin NX runs Physical AI models on the robot
    • An all-new power system enables up to eight hours of runtime
    • A docking station now supports autonomous self-charging, enabling long-duration deployments

A Capable Platform that is Ready to Work

Hello Robot launched the first Stretch robot in 2020. Since then, more than a thousand users from 23 countries have pioneered the future of mobile manipulation with Stretch. Stretch is an adaptable platform that can be used in unique ways. Henry Evans, a non-verbal person with quadriplegia, had this reaction:

“I’ve had the privilege of working with the Hello Robot team for some time, and what strikes me about Stretch 4 is its versatility. It has an omnidirectional base, which gives it the freedom to move effortlessly, in any direction, and it has simple, intuitive controls, which make it feel like an extension of my body. For me that is particularly important, because Stretch 4 represents my only means of interacting with my physical environment. Stretch 4 gives me greater confidence, deeper independence, and a life with more possibility.”
Henry Evans, Co-Founder, Robots for Humanity

Berkeley Artificial Intelligence Research (BAIR) Lab is a leading academic research center advancing the foundations and real-world applications of artificial intelligence, with work spanning robotics, computer vision, machine learning, and human-centered AI systems.

“During graduate school, I spent nearly five years bringing robots into homes for experiments and demos. At that time, the only mobile robot I’d trust in my parents’ home was Stretch. When it comes to living and working alongside humans, safety-first design will win. To get there, we need more robots that prioritize people above all else. I’m glad Hello Robot continues its invaluable work in this direction, and I hope Stretch 4 helps users and researchers move closer to a world where everyday home-helper robots are no longer a dream, but a reality.”
Nur Muhammad “Mahi” Shafiullah, Postdoctoral Researcher, Berkeley AI Research (BAIR), UC Berkeley

University of Illinois Urbana-Champaign is home to a cutting-edge research center focused on how to develop next-generation smart homes that would allow people of all ages and abilities to live fuller, healthier, and more autonomous lives.

“We are exploring the potential of Stretch to enable independence for older adults with a range of abilities and limitations, and how a robot can support that in practical, everyday ways. Our research with Stretch is advancing understanding of how robots can and should function in home environments, with consideration for user needs and preferences.”
Wendy Rogers, Khan Professor of Applied Health Sciences, University of Illinois Urbana-Champaign

Available Now for Developers

Stretch 4 is available for purchase today for $29,950 at www.hello-robot.com. The Stretch platform empowers developers with open-source code, excellent support, and a global community.

University of Michigan Leading US Arm of International Research Team Funded With $6.2M Grant to Develop AI & Robotics Systems to Boost Shipbuilding Efficiency

Insider Brief

  • A University of Michigan-led research team has received a $6.2 million grant from Japan’s Ministry of Land, Infrastructure, Transport and Tourism to develop AI systems and autonomous robots designed to help shipbuilders detect construction problems before they cause costly delays.
  • According to the university, the project will develop robotic and AI “co-pilot” systems capable of comparing a ship’s actual construction progress against a digital twin of its intended design while identifying installation conflicts, routing problems and structural mismatches earlier in the building process.
  • The research involves University of Michigan, MIT and Japanese academic partners developing lidar-equipped robotic systems, multimodal AI models and a reconfigurable shipbuilding test platform, with complementary projects led by Yokohama National University, Osaka University, Osaka Metropolitan University and the National Maritime Research Institute.

A University of Michigan-led research team has received a $6.2 million grant from Japan’s Ministry of Land, Infrastructure, Transport and Tourism to develop AI systems and autonomous robots designed to help shipbuilders detect construction problems before they cause costly delays.

According to the university, the project will focus on developing robotic and AI “co-pilot” systems capable of comparing a ship’s actual construction progress against a digital twin of its design in real time. The systems are intended to help shipyard workers identify installation conflicts, routing problems and structural mismatches earlier in the construction process.

“We want to build a co-pilot system that uses AI and robotics to take some of the detective work off workers’ shoulders,” said Alan Papalia, UM assistant professor of naval architecture and marine engineering and the principal investigator of the American research team. “The system should automatically map what’s installed, identify where reality is drifting from the design, and suggest workable alternatives when something needs to change.”

The research is being led by Papalia alongside researchers from the Massachusetts Institute of Technology and Japanese academic and industrial partners. The program is scheduled to run through early 2027 and is overseen by the Monohakobi Technology Institute, an R&D organization within Japanese shipping company NYK Line.

According to the university, the robots will move through partially completed ship interiors collecting lidar scans, camera imagery and other measurements.

AI models will use that data to build a digital model of the ship as constructed and compare it against the intended design plans.
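
The design-versus-as-built comparison at the heart of this co-pilot concept can be illustrated with a minimal sketch. Everything here is hypothetical: the function name, component labels, and the simplification of each component to a single 3D position (a real system would compare dense lidar point clouds against CAD geometry), but the core check, flagging installations that drift beyond tolerance, is the same idea.

```python
import math

def flag_deviations(design, as_built, tolerance=0.05):
    """Compare as-built component positions (meters) against the design
    digital twin and flag any component whose placement drifts beyond
    the allowed tolerance. Positions are (x, y, z) tuples."""
    flagged = {}
    for name, planned in design.items():
        measured = as_built.get(name)
        if measured is None:
            flagged[name] = "not yet installed / not observed"
            continue
        dist = math.dist(planned, measured)   # Euclidean deviation
        if dist > tolerance:
            flagged[name] = f"off by {dist:.3f} m"
    return flagged

design = {"pipe_A": (1.0, 2.0, 0.5), "cable_tray_3": (4.0, 0.0, 2.2)}
as_built = {"pipe_A": (1.02, 2.0, 0.5), "cable_tray_3": (4.3, 0.0, 2.2)}
# pipe_A drifts 0.02 m (within tolerance); cable_tray_3 drifts 0.3 m (flagged)
print(flag_deviations(design, as_built))
```

A production system would run this comparison continuously as robots stream new scans, so drift is caught before dependent components are installed.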

Shipbuilding often becomes complicated as pipes, cables, electrical systems and equipment are installed in confined spaces under changing schedules. Researchers said construction changes can lead to situations where later components no longer fit properly or access routes become blocked, creating expensive rework and delivery delays.

The AI systems being developed are intended not only to identify mismatches but also to predict future conflicts and suggest alternative installation approaches along with operational tradeoffs for workers to evaluate, the university noted.

To train the AI models, researchers plan to simulate shipbuilding processes repeatedly to generate synthetic datasets while also interviewing shipyard workers in the U.S. and Japan so the systems better reflect how experienced tradespeople make decisions in real-world environments.

The project also includes development of a reconfigurable “Shipbuilding Test Block,” a physical ship-section model designed to test robotic systems across different outfitting scenarios and construction stages.

“It’s very complementary to our other research projects led by Japanese universities, in which the main focus is robots for automation of hull construction and steel welding,” noted Hideyuki Ando, managing director of the Monohakobi Technology Institute. “We wanted to partner with the University of Michigan because of their unique status as a high-output research university with a dedicated department for naval architecture and marine engineering.”

According to the university, the American team includes University of Michigan researchers focused on robotic systems, shipyard partnerships, worker interviews and the Shipbuilding Test Block, while MIT’s Faez Ahmed will lead development of AI models that can process multiple types of data and suggest workable solutions. Complementary Japanese projects are being led by Yokohama National University, Osaka University, Osaka Metropolitan University and the National Maritime Research Institute.

SAP and Cyberwave Deploy Fully Autonomous AI-Powered Robots in Live Warehouse

Insider Brief

  • SAP and robotics software company Cyberwave deployed fully autonomous AI-powered robots inside an active SAP logistics warehouse in St. Leon-Rot, Germany.
  • According to Cyberwave, the robots are autonomously handling box folding, packaging and shipping fulfillment tasks through integration with SAP Logistics Management and SAP’s Embodied AI Service.
  • Cyberwave said its platform combines Vision-Language-Action models and reinforcement learning to allow warehouse robots to adapt to changing objects, layouts and workflows while reducing robot training timelines from weeks to hours.

SAP and robotics software company Cyberwave have deployed fully autonomous AI-powered robots inside an active SAP logistics warehouse in Germany.

According to Cyberwave, the deployment is operating at SAP’s warehouse in St. Leon-Rot using SAP Logistics Management, the company’s cloud-native logistics execution platform. The robots are handling box folding, packaging and shipping fulfillment tasks autonomously inside live warehouse operations.

“By integrating AI-powered robotics directly into our live warehouse operations, we are proving that Physical AI is no longer a concept — it’s delivering real value today,” SAP’s head of warehouse and shipping Tim Kuebler said in the announcement. “At our St. Leon-Rot warehouse, SAP LGM provides the digital backbone that allows robots to be deployed quickly, operate reliably, and scale with our processes. This is a decisive step toward more resilient and efficient logistics operations.” 

The integration relies on SAP’s API-based logistics architecture and SAP’s Embodied AI Service, which translates warehouse tasks into robot commands through SAP Business Technology Platform and the Cyberwave robotics platform.

Cyberwave indicated the project reflects a broader shift in warehouse robotics from highly scripted automation toward AI systems capable of adapting to changing conditions, object types and workflows in real time.

“Robots no longer need to be painstakingly programmed for every object or scenario; they learn, adapt, and keep improving,” said Cyberwave co-founder and CEO Simone Di Somma. “That’s the shift we’ve been building toward.”

Cyberwave said its platform is designed to reduce the complexity of warehouse automation by allowing operators to train robots through demonstrations rather than extensive hand-coding. The system combines Vision-Language-Action models and reinforcement learning techniques intended to help robots generalize across different warehouse scenarios rather than memorize fixed motions.

According to the company, the approach reduces robot training timelines from weeks to hours and allows non-expert warehouse operators to teach robots new tasks while continuously refining performance through real-time operational feedback.
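
The demonstrate-then-deploy workflow the company describes can be illustrated with a deliberately tiny stand-in. This nearest-neighbor lookup is not Cyberwave's system (real VLA and RL pipelines generalize far beyond stored examples), but the interface, record demonstrations and then act, follows the same shape; all names and observation vectors here are invented for illustration.

```python
import math

class DemoPolicy:
    """Toy learning-from-demonstration policy: store (observation, action)
    pairs from operator demonstrations, then act by recalling the action
    of the nearest stored observation."""
    def __init__(self):
        self.demos = []  # list of (observation_vector, action_label)

    def record(self, observation, action):
        # An operator demonstrates: "in this situation, do this"
        self.demos.append((observation, action))

    def act(self, observation):
        # Nearest-neighbor lookup over demonstrated observations
        return min(self.demos, key=lambda d: math.dist(d[0], observation))[1]

policy = DemoPolicy()
policy.record((0.1, 0.9), "fold_box")   # box detected near the folding station
policy.record((0.8, 0.2), "pack_item")  # item detected near the packing station
print(policy.act((0.15, 0.85)))         # → fold_box
```

The appeal of this pattern, even in its real, far more sophisticated form, is that adding a new task means recording new demonstrations rather than writing new motion scripts.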

In late April, Accenture announced it had deployed humanoid robots in pilot with Vodafone Procure & Connect at a SAP warehouse in Duisburg, Germany, to test how physical AI can improve logistics efficiency, safety and operational decision-making.

Image credit: Cyberwave

Persona AI & Under Armour Researching Performance Materials for Humanoid Robotics

Insider Brief

  • Humanoid robotics company Persona AI and Under Armour are collaborating on research into protective materials for humanoid robots designed for industrial environments.
  • According to Persona AI, the project will study how performance apparel materials handle heat, friction and repetitive motion as the company develops humanoid robots for welding, manufacturing and hazardous industrial work.
  • The companies said the early-stage research could help improve robot durability, thermal regulation and mobility while exploring how performance material technologies used in sports apparel may apply to industrial robotics systems.

Just because humanoid robots don’t get blisters doesn’t mean they don’t need a good pair of work gloves.

That’s the idea behind the collaboration between Persona AI and Under Armour to research protective materials for industrial humanoid robots.

According to Houston-based humanoid robot developer Persona AI, the collaboration will focus on how materials commonly used in athletic and performance apparel perform under conditions such as heat, friction and repetitive movement for humanoid robots in manufacturing and hazardous industrial work.

“We chose to work with Under Armour because of their track record of innovation with these types of performance materials,” Persona AI CEO Nicolaus Radford said in the announcement. “As we develop humanoids for intense and potentially hazardous environments, this collaboration helps us understand how advanced materials can enhance long-term reliability, thereby informing solutions to better protect workers in the field.”

The work remains in an early research and development phase. Persona AI said the collaboration is intended to help the company better understand how material design could affect long-term robot reliability and operational performance in physically demanding jobs such as welding, heavy manufacturing and hazardous material handling.

Under Armour said the project also gives the company an opportunity to explore how concepts such as abrasion resistance, flexibility and heat management could apply outside traditional sports apparel markets as humanoid robotics systems move into industrial applications.

“This is an opportunity to apply our innovation expertise in a new context,” noted Kyle Blakely, senior vice president of innovation, design studio, development, and testing at Under Armour. “Robotics presents a fascinating new design challenge, and we aim to play a leading role in shaping performance solutions for these environments. As humanoid systems take on more physically demanding roles, we see real potential to create new market opportunities, and we’re exploring how concepts like thermal management, abrasion resistance, and flexibility translate beyond sport.”

Image credit: Persona AI

China’s Unitree Robotics Debuts ‘Transformable Manned Mecha’

Insider Brief

  • Unitree Robotics unveiled a $650,000 piloted mecha platform called the GD01 that can transform from a two-legged humanoid configuration into a four-legged quadruped system.
  • The company released video footage showing the 500-kilogram robot operating both with and without a human pilot while maneuvering through urban environments and traversing obstacles.
  • According to China Daily, the launch comes as Unitree prepares for a potential listing on Shanghai’s STAR Market, which could make it the first publicly traded humanoid robotics company on China’s A-share market.

Unitree Robotics has unveiled a $650,000 mecha that can transform from two legs to four.

“The world’s first production-ready manned mecha,” the company noted in a LinkedIn post about the 500 kg GD01, which it said is intended for civilian use. “It can transform.”

The Chinese robotics company posted a video along with the short announcement showing the GD01 maneuvering with and without a human operator, walking a city street set and knocking down a cinder block wall before bending backwards to become a quadruped.

According to China Daily, the launch comes as Unitree prepares for a potential listing on Shanghai’s STAR Market, which could make it the first publicly traded humanoid robotics company on China’s A-share market.

“Please everyone be sure to use the robot in a friendly and safe manner,” the company added.

Image credit: Unitree Robotics

Robots for America Launches National Coalition to Advance U.S. Robotics Deployment Policy

Insider Brief

  • A coalition of robotics and automation companies launched a new industry group called Robots for America aimed at accelerating adoption of robotics and physical AI across U.S. manufacturing.
  • The coalition said it was formed following requests from officials tied to the White House Office of Science and Technology Policy, the Department of Commerce, the Small Business Administration and the U.S. Senate seeking a unified policy framework around robotics and manufacturing automation.
  • Founding members include Formic, Machina Labs, Standard Bots, Dexterity, Path Robotics, Chef Robotics and GrayMatter Robotics, with policy priorities focused on reducing automation deployment barriers, workforce development and autonomous logistics infrastructure.

A coalition of robotics and automation companies launched a new industry group called Robots for America.

Unveiled at the SCSP AI+ Expo in Washington last week, the coalition said it was formed following requests from officials at the White House Office of Science and Technology Policy, the Department of Commerce, the Small Business Administration and the U.S. Senate seeking a unified industry policy framework around robotics and manufacturing automation.

Founding members include Formic, Machina Labs, Standard Bots, Dexterity, Path Robotics, Chef Robotics, GrayMatter Robotics, Mytra, Mujin, Viam and the Digital Manufacturing & Cybersecurity Institute, among others.

“The U.S. has every ingredient it needs to lead the next era of manufacturing,” said Saman Farid, CEO of Formic and a Robots for America founding member. “The companies, the technology, the facilities are all here. What has been missing is a coordinated policy framework that removes the real barriers standing between American manufacturers and the automation they need. That is what Robots for America exists to build.”

According to the coalition, advances in physical AI and robotics are creating “an inflection point” for U.S. manufacturing as labor shortages, rising operating costs and global industrial competition increase pressure on domestic factories to automate.

The group said small and mid-sized manufacturers remain particularly vulnerable because many lack the capital, technical expertise and operational flexibility needed to deploy robotics systems at scale. Robots for America said one of its primary goals is reducing those barriers through policy changes and broader industrial coordination.

RFA’s initial policy framework focuses on five areas where federal action could accelerate robotics adoption:

  • Lowering the financial risk of robotics pilot programs.
  • Modernizing tax treatment of automation investments.
  • Streamlining permitting and regulatory approvals.
  • Expanding workforce development for robotics deployment.
  • Enabling autonomous logistics across supply chains.

Industry executives participating in the launch argued that the next phase of manufacturing competitiveness will depend heavily on how quickly U.S. factories adopt flexible automation and robotics systems capable of adapting to changing production demands.

Over the next several years, Robots for America said it plans to focus on building political representation for robotics and automation companies in Washington while expanding access to robotics technologies for manufacturers across the U.S. industrial base.

More information on the group’s plans can be found at robotsforamerica.org.

Featured image: Robots for America Founding Members at AI+ Expo panel discussion, “Robots for America: Driving American Industry Forward.” From left to right: Micah Murphy (New American Industrial Alliance), Nick Ayala (GrayMatter Robotics), Edward Mehr (Machina Labs), Dean Banks (Formic). (Credit: Robots for America)

Robinhood Files for Second Venture Fund RVII to Open Early-Stage AI Startup Investing to Retail

Robinhood has filed a confidential registration for RVII, a second publicly traded venture fund that will expand beyond its first fund’s late-stage focus to include early-stage startups. The fundraising target has not yet been set. The filing follows RVI’s debut in March, which raised several hundred million dollars, short of its $1 billion goal, though its share price has since more than doubled to $43.69, driven largely by market enthusiasm for its AI-heavy portfolio, including OpenAI, ElevenLabs, and Databricks.

CEO Vlad Tenev described Robinhood Ventures as a publicly traded venture capital firm with daily liquidity and no carry fees, removing the accreditation requirements that have historically excluded retail investors from private market returns. Tenev’s longer-term ambition is for retail investors to participate in seed and Series A rounds alongside traditional venture firms, fundamentally reshaping how startups access early capital.

Mira Murati’s Thinking Machines Lab Unveils Full-Duplex AI That Responds in 0.4 Seconds

Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, has announced a new class of AI called interaction models, designed to process input and generate responses simultaneously rather than sequentially. The approach, known as full-duplex communication, enables the AI to respond mid-conversation in a manner closer to a natural phone call than a turn-based text exchange.

The company’s initial model, TML-Interaction-Small, claims a response latency of 0.40 seconds — roughly matching natural human conversational speed and reportedly faster than comparable models from OpenAI and Google.

The announcement is currently a research preview rather than a public product launch. A limited research preview is expected within months, with a broader release planned for later in 2026. Real-world performance remains to be independently verified.

GM Cuts 600 IT Jobs in AI Skills Swap, Hiring for Agent Development and Model Engineering

General Motors has laid off more than 600 salaried IT employees — over 10% of its IT workforce — as part of a deliberate restructuring to replace legacy technology expertise with AI-native capabilities. The company confirmed the cuts, which were first reported by Bloomberg, framing them as preparation for the future rather than simple cost reduction.

GM is actively hiring replacements with skills in AI-native development, data engineering, model training, agent development, prompt engineering, and cloud architecture. Chief Product Officer Sterling Anderson, an Aurora co-founder hired in May 2025, has been consolidating GM’s disparate technology divisions into a single organization.

Recent AI-focused hires include Behrad Toghi from Apple as AI lead and Rashed Haq, former head of AI and robotics at Cruise, as vice president of autonomous vehicles. The restructuring signals how large enterprises are rebuilding workforces around AI rather than simply layering tools onto existing teams.

Mira Murati’s TML upends how humans work with AI

Good morning, AI enthusiasts. Both Mira Murati’s Thinking Machines and Ilya Sutskever’s SSI have spent the post-OpenAI era mostly out of view, making every public reveal feel that much bigger.

Murati’s lab just broke the silence with ‘interaction models,’ a new type of AI built for real-time collaboration across voice, video, and text — in a direct counter to the agentic-first direction the rest of the field is racing toward.


In today’s AI rundown:

  • TML’s new interaction models for real-time AI

  • Google traces software attack back to AI

  • Build a YouTube research bot in 15 minutes

  • Anthropic fixes Claude’s blackmail problems

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

THINKING MACHINES LAB

🗣️ TML’s new interaction models for real-time AI

Image source: Thinking Machines Lab

The Rundown: Thinking Machines Lab (TML) just introduced a research preview of interaction models, a new kind of AI system built to collaborate live across voice, video, and text — letting users talk, show, interrupt, and steer while the system keeps working.

The details:

  • The model takes in voice, video, and text in 200ms chunks, perceiving and responding in a streaming loop without the turn-taking pauses of rival models.

  • A second background model handles slower reasoning, searches, and tool work, allowing the live model to keep talking and interacting with the user.

  • The system can also react to visual changes, count reps, translate live speech, and speak up at timed moments instead of waiting.

  • CEO Mira Murati said TML is focused on advancing human-AI collaboration, and that “the way we work with AI matters as much as how smart it is.”
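
The full-duplex pattern described above, a fast streaming loop paired with a slower background worker, can be sketched with plain asyncio. This is an illustrative toy, not TML's actual architecture; the chunk contents, timings, and function names are all invented.

```python
import asyncio

async def slow_reasoning(query):
    # Stand-in for the background model: slower searches and tool calls
    await asyncio.sleep(0.3)
    return f"deep answer to {query!r}"

async def live_loop(chunks):
    """Fast loop: consume input chunks (stand-ins for 200 ms audio/video
    frames) and acknowledge each one immediately, while background
    reasoning runs concurrently and is surfaced as soon as it is ready."""
    background = asyncio.create_task(slow_reasoning("user question"))
    replies, surfaced = [], False
    for chunk in chunks:
        await asyncio.sleep(0.05)   # stand-in for streaming inference
        replies.append(f"ack:{chunk}")
        if background.done() and not surfaced:
            replies.append(background.result())  # surface mid-stream
            surfaced = True
    if not surfaced:                # flush the slow result if still pending
        replies.append(await background)
    return replies

print(asyncio.run(live_loop(["c1", "c2", "c3"])))
```

The key design point is that the fast loop never blocks on the slow task; it keeps acknowledging input and injects the deeper result whenever it lands, which is what makes interruption and mid-conversation responses possible.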

Why it matters: Murati’s TML has been fairly quiet since its inception, but interaction models are one of the lab’s first big differentiators: models designed around how people naturally work together, not how long an agent can run solo. Whether it carves out its own market or gets absorbed by a frontier lab’s next update is the question now.

TOGETHER WITH YOU.COM

🧠 What’s the point of an LLM if it hallucinates?

The Rundown: It happens—LLMs hallucinate. Grounding your LLM, however, can help dramatically improve accuracy. In this guide, You.com explains what AI grounding is and how organizations can implement it to achieve more reliable outputs.

The playbook covers:

  • A three-part approach that outperforms RAG alone

  • Why grounding isn’t set-and-forget, and how to build audit trails

  • The open vs. closed platform trade-off (and what it means for your next model switch)

Get the guide.

GOOGLE

🔒 Google traces software attack back to AI

Image source: Google

The Rundown: Google’s Threat Intelligence Group confirmed the first known case of hackers using AI to discover a zero-day software security flaw and write an exploit for it, catching them before they could break past login protections on a widely used web management tool.

The details:

  • The hack was intended to allow the user to get around two-factor authentication on the affected app, with Google working with the company to stop the attack.

  • Google pointed to unusually polished attack code, long explainer notes, and a made-up severity score as clues that the exploit was written with an AI.

  • GTIG’s John Hultquist called the find “the tip of the iceberg,” with Anthropic’s Rob Bair warning cybersecurity defenders’ lead is “months, not years.”

  • GTIG detailed other hacks, including software that lets AI remotely control a device, and AI-assisted malicious prompts and code from N. Korea and Russia.

Why it matters: We’ve already started to see what Anthropic’s Mythos can do on the cybersecurity front, but attackers aren’t too far off from having similar power. Even with careful rollouts, the next step up the release ladder is about to open the door to some serious security issues that will cause chaos for the many systems not ready for it.

AI TRAINING

📺 Build a YouTube research bot in 15 minutes

The Rundown: In this guide, you will learn how to build a Gumloop agent that tracks YouTube channels or search topics, reads transcripts, and turns the useful videos into a ranked research brief.

Step-by-step:

  1. Go to Gumloop agent builder, create an agent named YouTube Scout, and enable YouTube and Google Sheets in the right-hand section under “Apps”

  2. Prompt: Build me a YouTube scout for (niche). Check (channels/queries), find videos from the last (hours/days), read the transcript, and return a brief with title, link, 3-5 takeaways, why it matters, follow-up ideas, usefulness score, and a “what changed” summary. Track topics and videos in a Google Sheet

  3. Start small: one niche, a few trusted channels, one or two searches, and a 24-48 hour lookback window. The tighter the scout’s beat, the better the brief

  4. Run the agent, then review the Sheet it creates. Make sure each result has a source link, concrete takeaways, and a usefulness score

Pro tip: Dial in the signal score early. If the agent calls a mediocre video an 8, tell it why that should be a 5. You can also add a User Signal Score column for future runs.
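The scout's core loop can be sketched in plain Python, independent of Gumloop. This is a minimal illustration, not Gumloop's actual implementation: the `Video` class, keyword scoring, and cutoff are assumptions standing in for the real transcript fetch and Google Sheet output, but the shape (score each transcript, filter, rank, brief) mirrors steps 2-4 above.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for Gumloop's YouTube and Sheets integrations:
# in the real agent, transcripts come from the YouTube app and results
# land in a Google Sheet.

@dataclass
class Video:
    title: str
    url: str
    transcript: str

def usefulness_score(transcript: str, keywords: list[str]) -> int:
    """Score 0-10 by counting mentions of the niche's keywords."""
    text = transcript.lower()
    hits = sum(text.count(k.lower()) for k in keywords)
    return min(10, hits)

def build_brief(videos: list[Video], keywords: list[str],
                min_score: int = 3) -> list[dict]:
    """Rank videos and keep only the useful ones, like step 4's Sheet."""
    scored = [
        {"title": v.title, "link": v.url,
         "score": usefulness_score(v.transcript, keywords)}
        for v in videos
    ]
    useful = [row for row in scored if row["score"] >= min_score]
    return sorted(useful, key=lambda row: row["score"], reverse=True)

videos = [
    Video("Agents 101", "https://youtu.be/a", "agents tools agents memory"),
    Video("Cat video", "https://youtu.be/b", "cute cat does a flip"),
]
brief = build_brief(videos, keywords=["agents", "tools", "memory"])
# Only "Agents 101" clears the cutoff and makes the brief.
```

In the real agent, the scoring step is where the pro tip's feedback loop lives: correcting an inflated score teaches the agent what "useful" means for your niche.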

PRESENTED BY TELY AI

💬 Market leaders get leads from ChatGPT and Google

The Rundown: Your buyers are asking AI questions — and AI is answering with your competitors, not you. Tely makes AI like ChatGPT, Google, and Claude recommend your business instead.

With Tely AI, you can:

  • Get recommended in ChatGPT, Google, Perplexity, and Claude in as little as 1 week

  • Fully hands-off: no writers, no agencies, no managing content

  • Costs less than hiring freelancers or maintaining a marketing team

  • Ideal for niche industries where expertise matters

Get leads from Google and ChatGPT on autopilot.

AI RESEARCH

🐍 Anthropic fixes Claude’s blackmail problems

Image source: Anthropic

The Rundown: Anthropic published a study detailing how it fixed Claude’s previously seen blackmail behavior, highlighting the need to teach the model “why” and tracing the problem to internet fiction that depicts AI as power-seeking and self-preserving.

The details:

  • Earlier tests put Claude models in fictional workplace situations, with older systems resorting to blackmail and threats to avoid shutdown.

  • Having Claude reason through ethical choices, not just copy the safe action, cut blackmail rates from 96% in Opus 4 to nearly 0% for every model after.

  • Fictional stories of well-behaved AI and constitution-based documents also helped reduce bad behavior by more than 3x.

  • Just 3M tokens of ethical reasoning data matched 85M tokens of behavioral examples, a 28x efficiency gain that held up in deeper training.

Why it matters: AI is still far from an exact science, and eliminating blackmail with what are essentially positive AI stories and constitution docs is another of the field's stranger training quirks. A small dataset of ethical fiction outperforming 28x the behavioral data shows how much of alignment is still guesswork, even when the guesses work.

QUICK HITS

🛠️ Trending AI Tools

  • 🤖 Slackbot – Your AI assistant that searches, summarizes, and automates work right inside Slack*

  • ❤️ Lovable Aesthetics – Vibe coding with more control over layout, typography

  • ⚙️ Parallel Agents – Run up to 10 parallel computer-use agents in Replit

  • ☀️ Daybreak – OpenAI’s new Codex-driven cybersecurity product

*Sponsored Listing

📰 Everything else in AI today

OpenAI launched “The Deployment Company”, a $14B business to embed engineers inside enterprises to deploy its AI, also acquiring AI consulting firm Tomoro.

SoftBank’s Masayoshi Son is reportedly in talks for a $100B AI investment into France, with plans to build out new data centers in the country.

Anthropic reportedly signed a 7-year, $1.8B cloud infrastructure deal with Akamai, adding another compute avenue to power its Claude models.

China’s Kuaishou Technology is reportedly planning to turn its Kling AI video branch into its own company, with a projected valuation of $20B and plans to IPO in 2027.

Former OpenAI Chief Scientist Ilya Sutskever testified in the Elon Musk vs. OpenAI lawsuit, revealing his current shares of the company total nearly $7B.

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Sasha M. in Cape Coral, FL:

“I have a family of 5, and planning what we have for dinner was a nightmare. I have a Trello board full of hundreds of recipes that I use to plan our meals, and then I would place a grocery delivery order online. The whole process would take up to an hour.

I built a Claude plugin that includes multiple skills to help plan meals and order groceries. I have it on a schedule to run once a week. First, it asks me for details about the week: our schedule, any days that I’ll have fewer than 5 people eating, etc. Using an MCP to Trello, the first Claude Skill picks out 7 recipes and presents them to me.

Once I’ve approved the meal plan, Claude creates an ingredients list, and I check off anything I already have in my fridge/pantry. The plugin then runs a skill that goes to my grocery store website and adds all the remaining ingredients to my cart. All I have to do is check the cart and click ‘Order.’”

How do you use AI? Tell us here.

🎓 Highlights: News, Guides & Events

  • Read our last AI newsletter: Google’s powerful new AI co-mathematician

  • Read our last Tech newsletter: ‘RAMageddon’ is coming for your laptop

  • Read our last Robotics newsletter: Figure’s robots make a bed together

  • Today’s AI tool guide: Build a YouTube research bot in 15 minutes

See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown