More women are turning to ChatGPT for emotional support, using the AI chatbot as a stand-in therapist as mental health systems buckle under pressure. With long wait times and soaring costs, AI is filling a growing gap.
Mental health care is harder to access than ever. In the UK, NHS data shows patients are eight times more likely to wait over 18 months for mental health treatment than for physical health. Private therapy isn’t always an option either, with sessions costing £60 or more.
In that vacuum, ChatGPT has become a surprising outlet.
Real voices, real feelings
Charly, 29, from London, turned to ChatGPT while grappling with her grandmother’s terminal illness:
“It’s been so helpful to ask the crass, the gruesome, the almost cruel questions about death… the things I feel twisted for wanting to understand.”
Ellie, 27, from South Wales, said it helped her feel seen when no one else was around:
“It didn’t have full context to my life like my therapist does, but it was accessible and non-judgmental in times of crisis.”
Julia, 30, in Munich, used it when her therapist was booked up. The responses felt similar to a therapy app:
“I was surprised at how good the answers were… but it was too practical. My therapist challenges me. ChatGPT didn’t do that.”
What AI can and can’t do
ChatGPT offers instant, always-available support. It’s private, non-judgmental, and often comforting. But it lacks emotional nuance, lived context, and the tough questioning that drives real therapeutic growth.
AI isn’t a replacement for trained professionals, but for many women stuck in limbo, it’s become a digital lifeline.
The bigger issue? People are asking robots for empathy because the human systems keep failing them.
The post Therapists Too Expensive? Why Thousands of Women Are Spilling Their Deepest Secrets to ChatGPT appeared first on DailyAI.
Another year, another viral deepfake of Katy Perry at the Met Gala, and once again, she wasn’t even there.
Photos showing the pop star in a sleek black designer gown circulated widely on social media during Monday night’s event, matching the “Superfine: Tailoring Black Style” theme. But the images were AI-generated. Perry quickly clarified she was not at the Met; she was on tour.
Perry’s reaction
“Couldn’t make it to the MET, I’m on The Lifetimes Tour (see you in Houston tomorrow IRL),” she posted to Instagram alongside the fake images.
She added a jab at AI confusion: “P.s. this year I was actually with my mom so she’s safe from the bots… but I’m praying for the rest of y’all.”
The repeat hoax
This marks the second year in a row Perry has gone viral for an AI-generated Met Gala look. In 2024, a fabricated image of her in a floral ball gown fooled thousands, including her own mother.
These deepfakes are getting harder to spot. A fake post claiming Perry wore a never-before-seen Mugler fabric went viral with over 400K views and was even falsely credited to Getty Images.
The spread of believable AI-generated content is becoming a growing concern, especially as it dupes not just fans, but family.
AI is now dressing celebrities for events they don’t attend, and millions are still falling for it.
Perry continues her “Lifetimes Tour” with her next stop in Houston. Meanwhile, the internet keeps grappling with what’s real and what’s algorithm.
Are deepfakes becoming the new celebrity PR?
The post Katy Perry Didn’t Attend the Met Gala, But AI Made Her the Star of the Night appeared first on DailyAI.
China has unveiled the world’s first fully AI-powered hospital, marking a radical shift in the future of healthcare.
Developed by Tsinghua University in Beijing, the “Agent Hospital” features 14 AI doctors and 4 AI nurses that can diagnose, treat, and manage up to 3,000 patients per day, without any human staff.
Faster, smarter care: What would take human doctors 3 years, the AI doctors can do in 1 day.
High IQ bots: These AI agents scored a 93.06% pass rate on the US Medical Licensing Exam.
Training without risk: The virtual hospital allows medical students to practice in a fully simulated, no-risk environment.
JUST IN: China opens the world’s first AI hospital with 14 artificial intelligence doctors. pic.twitter.com/JZsavX9sIt
— Whale Insider (@WhaleInsider) May 3, 2025
How it works
The hospital uses multimodal large language models (MLLMs) to simulate real-time interactions with patients, handle diagnoses, prescribe treatments, and monitor disease progression, all digitally.
It also includes predictive capabilities that can simulate how diseases spread, potentially helping officials prepare for future pandemics.
While it’s still in the research phase, Agent Hospital points to a future where AI could alleviate overburdened healthcare systems, provide round-the-clock care in underserved areas, and revolutionize medical education.
The technology must still clear regulatory and ethical hurdles, but the direction is clear: the AI doctor will see you now.
The post China Unveils World’s First AI Hospital: 14 Virtual Doctors Ready to Treat Thousands Daily appeared first on DailyAI.
People are asking AI to recreate the same image over and over again, with each iteration drifting further and further from the original.
The results are sometimes amusing, sometimes unsettling. In some cases, the images completely shape-shift into crazy abstract forms. In others, facial features are wildly exaggerated.
One of the most viral examples shows actor Dwayne “The Rock” Johnson replicated a staggering 101 times.
somebody on Reddit told ChatGPT to replicate an image of The Rock without changing anything 100 times over pic.twitter.com/IcRgmWHCWK
— Kristi Yamaguccimane (@TheWapplehouse) May 3, 2025
While the first few iterations closely resembled the original photo, subsequent versions saw Johnson’s features morph and distort, eventually becoming totally abstract.
ChatGPT prompted to “create the exact replica of this image, don’t change a thing” 74 times pic.twitter.com/u6E8aVThy2
— internet hall of fame (@InternetH0F) May 1, 2025
I tried the “Create the exact replica of this image, don’t change a thing.” trend. pic.twitter.com/xcAcsvBRJp
— Rish Agarwal (@rish404) April 30, 2025
So what’s going on under the hood? It’s primarily a result of how AI models are trained and how they encode and reconstruct images.
When an AI is asked to recreate an image, it doesn’t simply copy and paste the original pixels. Instead, it breaks the image down into a complex set of features and patterns, which it then tries to reassemble based on its understanding of what the image should look like.
However, this process is inherently imperfect and introduces small errors or deviations each time. As the image is repeatedly fed back into the AI, these deviations compound, leading to increasingly distorted or unexpected results.
It’s a bit like playing a visual game of “telephone” or “whispers,” where the message changes a little each time it is passed along.
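The compounding effect is easy to simulate. In the toy sketch below, an “image” is just a list of numbers, and each regeneration pulls every value slightly toward a learned prior (standing in for the model’s baked-in expectations) while adding reconstruction noise. The parameters and the prior are invented for illustration, not measured from any real model:

```python
import random

def regenerate(image, prior, fidelity=0.95, noise=0.02, rng=None):
    """One 'recreate this image' step: each value is pulled slightly
    toward the model's prior and picks up reconstruction noise."""
    rng = rng or random.Random()
    return [fidelity * px + (1 - fidelity) * pr + rng.gauss(0, noise)
            for px, pr in zip(image, prior)]

def drift(original, iterations, prior, seed=0):
    """Feed the output back in repeatedly and record how far each
    iteration has wandered from the original."""
    rng = random.Random(seed)
    img = list(original)
    history = []
    for _ in range(iterations):
        img = regenerate(img, prior, rng=rng)
        distance = sum(abs(a - b) for a, b in zip(img, original)) / len(img)
        history.append(distance)
    return history

original = [0.2, 0.5, 0.8, 0.3]   # toy stand-in for an image
prior = [0.5, 0.5, 0.6, 0.5]      # the model's bias
d = drift(original, 100, prior)
# early iterations stay close to the original; the error compounds
```

Even with 95% per-step fidelity, a hundred round trips leave almost nothing of the original: the image converges on the model’s prior plus accumulated noise, which is the same dynamic driving the viral threads.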
However, AI’s aberrations may also reveal something about the biases and assumptions baked into these models. For example, some images seem to exaggerate facial features or create a warmer, more orange-tinted color palette.
Users also noticed that eyebrows become highly exaggerated – almost painted on in the style of social media filters. As for the orange tint, some speculate that warmer tints are preferred in photography and thus are more common in the training data.
Really, though, we have no idea what’s happening inside the immense “black box” that is today’s largest frontier models.
But in the meantime, social media users seem to be having plenty of fun with the surreal, often disturbing results of repeated recursive AI image generation.
AI trends like the recent AI action figure craze have taken off on social media, with thousands of people joining in across X, Reddit, TikTok, and Instagram.
One quipped, “I drained the ocean replicating my image 100 times.” Not to be a buzzkill, but it’s a good point.
The post “Create a replica of this image. Don’t change anything” AI trend takes off appeared first on DailyAI.
A wave of AI-powered scams is sweeping across WhatsApp, costing UK families nearly half a million pounds in 2025 alone, and it’s only May.
Cybercriminals are now combining old tricks with new tech. In the evolving “Hi Mum” scam, fraudsters impersonate a loved one over WhatsApp and ask for emergency cash.
The twist
They’re now using AI-generated voice messages to mimic children’s voices, making the deception frighteningly convincing.
“Scammers are increasingly getting better at manipulating people… cloning any voice is now simple, even in a matter of moments,” says Jake Moore, global cybersecurity advisor at ESET.
By the numbers:
506 WhatsApp scams since Jan 2025
Victims lost £490,606 ($651,230)
April alone: 135 cases, £127,417 lost
How it works:
You get a WhatsApp message from an unknown number: “Hi Mum, I lost my phone.”
They claim they’re locked out of their bank.
They send a voice note and it sounds like your child.
They ask you to urgently transfer money to a new account.
A screen-grab excerpt of the WhatsApp ‘Hi mum’ text scam. Photograph: Santander
The danger
Scammers scrape social media for voice clips and personal details. Then they use generative AI to clone the voice and craft a believable story.
“I was able to fool my own mother with an AI version of my voice,” Moore admits.
Who’s at risk:
Parents with active kids on social media
Elderly users less familiar with AI tricks
Anyone receiving messages from unfamiliar numbers
What you can do:
Always call back using a saved number before sending money
Set up family ‘code words’ to verify real emergencies
Never send money to a new account without confirmation
Report scams to 7726 (UK scam reporting line)
If you fall victim, contact your bank immediately
Stay vigilant
AI scams are advancing fast. WhatsApp, though encrypted, can’t stop someone with your number from messaging you.
“These scams are evolving at breakneck speed,” says Chris Ainsley, head of fraud at Santander.
AI has supercharged a common scam. If your child “calls” from a strange number asking for money, think twice. Then call them on the number you know.
The post WhatsApp Warning: UK Parents Scammed Out of £500K by AI That Pretends to Be Their Kids appeared first on DailyAI.
RingCentral has expanded its AI Receptionist product with new links to Shopify, Calendly and WhatsApp, as the communications software company tries to push the product beyond basic call answering and into more routine customer service tasks.
The company said AI Receptionist, known as AIR, can now handle some order enquiries through Shopify, arrange appointments through Calendly, and respond to inbound WhatsApp messages. AIR is also being added to shared SMS inboxes and call queues, so it can answer texts and step in when phone lines are busy or staff are not available.
RingCentral said more than 11,800 businesses now use AIR.
The product is aimed mainly at smaller and mid-sized organisations that receive regular inbound enquiries, and RingCentral cited healthcare, financial services, legal, hospitality, and construction as areas where customers are using AIR for front-desk tasks and after-hours cover.
Keller Interiors, an installation company working for Lowe’s Home Improvement, said it deployed AIR in 33 locations. Beth Owens, chief of staff, said the company had a routing problem that was difficult to solve with staff. “RingCentral AIR solved a problem we didn’t have a good human answer for: how do you route every inbound call correctly, 24/7, across 33 locations, without building a call centre?” Owens said. She said Keller Interiors had reduced waiting times from 12 minutes to 90 seconds and saw customer satisfaction scores rise by three points over four months.
Tara Breaux, vice-president of operations at Maple Federal Credit Union, said it used AIR to reduce hold times in branches. “We’ve reduced hold times by 90%, enabling faster service, less strain on staff, and more focus on the conversations that matter most.”
The new Shopify link is designed to let AIR answer basic questions about orders and customer support over the phone. The Calendly integration lets AIR schedule appointments, and the WhatsApp connection extends AIR into the messaging app used widely by consumers and small businesses.
RingCentral is also adding automatic language detection. The company said AIR can recognise a caller’s language and continue the conversation in that language, offering 10 languages, including English, Spanish, French, Italian, German, and Portuguese.
Michelle Morgan, research manager for AI-enabled sales, customer service and contact centre strategies at IDC, said the update was an example of applied AI in daily business. “RingCentral’s expansion of AIR into Shopify, Calendly, WhatsApp, and intelligent call queues shows what applied AI should look like: every feature tied to a clear pain point,” she said.
Joe Fahrner, RingCentral’s vice-president of growth for AI products, gave the company’s more expansive view of the product, saying AIR is becoming a “digital employee” for small and mid-market businesses.
RingCentral said AIR is now available as a standalone product starting at $49 a month, including 100 minutes. Existing RingEX customers can add AIR starting at $39 a month, also including 100 minutes.
(Image source: Pixabay.)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post RingCentral adds Shopify, Calendly, and WhatsApp to AI Receptionist appeared first on AI News.
The words “pressure” and “NHS” go hand in hand in the UK and unfortunately there is no sign of a reduction in the strain the institution suffers any time soon. As NHS England continues the struggle to reduce its 7.25 million waiting list, new policies are being introduced to move care away from hospitals and into the community, despite GPs’ warning of increased workloads and risk to patients. Add in looming doctor strikes and deepening staff shortages and the backdrop of the health service does not look rosy.
In a bid to relieve some of the burden, AI-enabled virtual care is emerging as a tool to manage the growing number of patients outside hospital settings. The technology is being implemented to help in three important areas: waiting lists, hospital capacity, and corridor care.
Michael Macdonnell, Deputy CEO at European virtual care provider Doccla, who has first-hand experience working in the NHS, commented, “The NHS is facing unprecedented pressure, with a 7.2 million patient waiting list, patients waiting in ambulances and in corridors, without the growing budgets of previous years.”
“AI underpins how virtual care works at scale. Machine learning models are used to identify patients at risk of deterioration by combining NHS and proprietary datasets, while continuous data from clinical-grade wearables (e.g. oxygen saturation, blood pressure, ECG) is analysed to detect early warning signs. This lets clinical teams intervene sooner and safely manage far larger patient groups than would otherwise be possible.”
Doccla and virtual care
Doccla is a company providing remote patient monitoring and virtual wards to NHS trusts. The Doccla model is “designed both to support earlier discharge and to prevent avoidable admissions, particularly for those with long-term conditions.”
There is already evidence for Doccla’s effectiveness, with the NHS seeing a 61% reduction in bed days, an 89% reduction in GP appointments, and a 39% drop in non-elective admissions. Not only has this AI-driven software improved efficiency, it is also reportedly saving the NHS approximately £450 a day compared with the cost of a hospital bed, the company says. Figures suggest that for every £1 spent on such technology, the NHS saves an estimated £3 compared with non-tech models.
Mr Macdonnell said, “At Doccla, we use machine learning to identify patients at risk of deterioration before they reach crisis point. Continuous data from clinical-grade wearables like oxygen saturation, blood pressure and ECGs, are analysed with medical records to detect early warning signs.”
The insights are allowing clinical teams to intervene sooner and manage larger caseloads than more traditional systems allow. AI may also be improving clinicians’ wellbeing by helping reduce the administrative burden. For instance, large language models (LLMs) are being used to streamline clinical notes and present complex information to patients in a more accessible way. AI is not expected to replace clinicians, only to make them more effective, so clinicians reading this can breathe a sigh of relief.
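The early-warning idea can be illustrated with simple threshold checks on wearable readings. The cut-offs below are invented for the example, are not clinical guidance, and are far cruder than the machine-learning models Doccla describes:

```python
def early_warning(vitals):
    """Toy early-warning check on a dict of wearable readings.
    Thresholds are illustrative only, not clinical guidance."""
    alerts = []
    if vitals.get("spo2", 100) < 92:
        alerts.append("low oxygen saturation")
    if vitals.get("systolic_bp", 120) < 90:
        alerts.append("low blood pressure")
    if vitals.get("heart_rate", 70) > 130:
        alerts.append("tachycardia")
    return alerts

stable = early_warning({"spo2": 97, "systolic_bp": 118, "heart_rate": 72})
worsening = early_warning({"spo2": 89, "systolic_bp": 85, "heart_rate": 140})
```

A real system would score trends over time and combine signals with medical records rather than flag single readings, but the principle is the same: surface deterioration before it becomes a crisis.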
Clinical trust in this technology remains low, and it will only grow through transparency and further evidence of success. Predictive models must also deliver accurate and fair outcomes across diverse patient groups before being deployed at scale in real-world clinical settings.
As the UK’s NHS works to move more care away from hospitals and into the community, with its “Fit for the Future: 10 Year Health Plan for England,” AI stands at the forefront of this transformation. The future of AI healthcare is set to allow patients to remain more independent and receive the care they need in familiar surroundings.
(Image source: Pixabay under licence.)
The post AI helping ease the UK’s NHS burden appeared first on AI News.
Ahead of the AI & Big Data Expo at the San Jose McEnery Convention Center, May 18-19, we spoke to Jerome Gabryszewski, HP’s AI & Data Science Business Development Manager, about AI, processing data for AI ingestion, and local versus cloud compute.
The technology media is fond of quoting that data is ‘the new oil’, but the reality on the ground is that, despite having access to plenty of first-party information, actually leveraging it to the business’s advantage can prove problematic, especially at enterprise scale.
Should you choose a cloud-hosted AI model, or local compute? How do you get your ‘data house’ in order so the smart models can produce meaningful results? And as ever, we like to encourage our interviewees to help us predict the next chapter in the fast-moving story of business IT in this AI-dominated landscape.
Artificial Intelligence News: Moving from manual to automated data ingestion sounds great in theory, but it’s notoriously difficult. Where is HP seeing companies get stuck right now?
One of the most consistent friction points we see is that organisations underestimate the organisational and architectural debt behind their data. Before automation can take hold, they have to reconcile fragmented data ownership across departments, inconsistent schemas in systems, and legacy infrastructure that was never designed for interoperability. The technical lift of automation is often smaller than the governance and integration work that has to precede it.
Artificial Intelligence News: When AI models start updating themselves continuously, things can easily go sideways. How are you advising clients to handle risks like concept drift and data poisoning?
Continuous learning is where AI goes from a project to a liability if it isn’t governed carefully. What we advise clients is to treat model updates the same way they treat code deployments. Nothing goes to production without a validation gate. For concept drift, that means MLOps pipelines with automated drift detection and human-in-the-loop triggers before retraining kicks in. For data poisoning, it’s a data provenance problem as much as a security problem. It’s critical to know exactly where your training data comes from and who can touch it. The clients who get this right aren’t necessarily the most technically sophisticated; it’s those who’ve embedded AI governance into their risk frameworks before they scaled.
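A validation gate of the kind described can be sketched in a few lines. This is an illustrative toy, not a production MLOps pipeline; the single-feature mean-shift signal and the two-sigma threshold are assumptions chosen for the example:

```python
import statistics

def drift_score(baseline, live):
    """Standardised shift in a feature's mean between the training
    baseline and live traffic (a crude concept-drift signal)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma if sigma else 0.0

def validation_gate(baseline, live, threshold=2.0):
    """Block automatic retraining and flag a human reviewer
    when the drift signal exceeds the threshold."""
    score = drift_score(baseline, live)
    return {"drift": round(score, 2), "auto_retrain": score < threshold}

baseline = [50 + i % 10 for i in range(100)]   # stable historical feature
shifted = [80 + i % 10 for i in range(100)]    # live traffic has drifted
gate = validation_gate(baseline, shifted)
# gate["auto_retrain"] is False, so a human must sign off before retraining
```

Production pipelines would track many features with distribution-level tests rather than a mean shift, but the gating pattern — measure, compare to threshold, require sign-off — is the same.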
Artificial Intelligence News: I want to touch on HP’s hardware roots. What does a modern workstation or compute setup actually need to look like today to handle the sheer weight of an autonomous AI lifecycle?
HP’s roots here actually matter. The Z series has been purpose-built for the most demanding professional compute for over 15 years, so when we talk about what an autonomous AI lifecycle actually requires from hardware, we’re not guessing; we’ve been iterating on this problem longer than most!
The answer isn’t a single machine; it’s a spectrum. At the individual developer level, you need local compute powerful enough to run real experiments without being cloud-dependent for every iteration. The ZBook Ultra and Z2 Mini handle the mobile and compact deskside tiers: professional-grade machines capable of running local LLMs and heavy workflows simultaneously.
The ZGX Nano is where things get really interesting for AI-first teams. It’s an AI supercomputer that fits in the palm of your hand (15x15cm), but it’s powered by the NVIDIA GB10 Grace Blackwell Superchip with 128GB of unified memory and 1,000 TOPS of FP4 AI performance. A single unit handles models up to 200 billion parameters locally. And when a team needs to scale beyond that, you connect two units together via high-speed interconnect and you’re working with models up to 405 billion parameters… no cloud, no data centre, no queue. It comes pre-configured with the NVIDIA DGX software stack and the HP ZGX Toolkit, so teams go from setup to first workflow in minutes, not days.
Moving up, the Z8 Fury gives power-user teams up to four NVIDIA RTX PRO 6000 Blackwell GPUs in a single system (384GB VRAM): That’s the full model development cycle running on-premises. And at the frontier, the ZGX Fury changes the conversation entirely. Powered by the NVIDIA GB300 Grace Blackwell Ultra Superchip with 748GB of coherent memory, it delivers trillion-parameter inference at the deskside, not the data centre. For teams running continuous fine-tuning and inference on sensitive data, it typically pays for itself in 8 to 12 months versus equivalent cloud compute.
And for organisations that need to cluster and scale further, the entire Z portfolio is designed with rack-ready form factors that drop into managed IT environments without compromising security or data residency.
Jerome Gabryszewski, AI & Data Science Business Development Manager, HP.
The larger point is this: the autonomous AI lifecycle creates a governance and latency problem, not a compute problem. Teams can’t keep sending sensitive training data to the cloud every time a model needs to update. HP’s portfolio gives organisations a hardware path that scales with their workflow maturity, from the developer’s desk all the way to distributed on-premises compute. The hardware finally matches the ambition of what these AI systems actually need to do.
Artificial Intelligence News: Gen AI compute costs are spiraling for a lot of enterprises. What is the practical fix for balancing that massive expense with modern cloud efficiency?
The cost problem is structural, not cyclical. Enterprise GenAI spend surged to $37 billion in 2025, and 80% of companies still missed their cost forecasts by more than 25%. The core tension is that unit inference costs are actually falling, but total spend keeps rising because use is growing faster than cost drops. The cloud API model was designed for experimental, low-volume workloads. It was never built to be the economic engine for production AI at scale.
The practical fix is a discipline problem before it’s an infrastructure problem: Draw a hard line between exploratory work and production workloads, and never use the same compute model for both. Early iterative work – prototyping, fine-tuning, model evaluation – should run on local hardware like the ZGX Nano or Z8 Fury, where you’re spending capital once instead of burning operational budget on experiments without a clear ROI path.
The organisations getting this right are running a three-tier model: cloud for burst training and frontier model access you’ve genuinely earned, on-premises HP Z infrastructure for predictable high-volume inference, and edge compute where latency is critical. Independent analysis shows on-premises can deliver up to an 18x cost advantage per million tokens over a five-year lifecycle. The framing we use with clients is simple: cloud is for scale you’ve earned, not scale you’re hoping for.
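The capital-versus-operational trade-off described above reduces to a simple break-even calculation. The figures in the example are invented placeholders, not HP or cloud-provider pricing:

```python
def breakeven_months(hardware_cost, monthly_cloud_spend, monthly_onprem_opex=0.0):
    """Months until a one-off hardware purchase overtakes recurring
    cloud spend. Returns infinity when cloud stays cheaper."""
    monthly_saving = monthly_cloud_spend - monthly_onprem_opex
    if monthly_saving <= 0:
        return float("inf")  # no break-even: cloud wins on cost
    return hardware_cost / monthly_saving

# invented figures: a $60k workstation replacing $8k/month of cloud
# inference, with $500/month of on-prem power and maintenance
months = breakeven_months(60_000, 8_000, 500)   # 8.0
```

The same function shows why the discipline matters: bursty, exploratory workloads with low monthly spend never break even on dedicated hardware, while predictable high-volume inference pays the capital back quickly.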
Artificial Intelligence News: Everyone wants their proprietary data to be ‘AI-ready.’ How do companies pull that off without exposing sensitive or siloed information?
The mistake most companies make is treating ‘AI-ready data’ as a data engineering problem when it’s really a data sovereignty problem, and those require different solutions. Sending proprietary data to a cloud model for processing isn’t just an exposure risk, it’s a governance failure waiting to happen, especially in regulated industries where even the act of transmitting data externally can trigger compliance violations.
The architecture that solves this is Retrieval-Augmented Generation (RAG) running on local infrastructure, which lets a model retrieve relevant context from your internal knowledge base at query time without ever training on it or exposing it externally. Your proprietary data stays on-premises, inside hardware you control. For example, a ZGX Nano or Z8 Fury running a locally hosted model can power a full RAG pipeline against sensitive internal documents with no data leaving the building and no token spend sent to a third party.
The access control layer is where this gets operationally serious; a well-architected RAG system enforces role-based permissions at the retrieval level, so the AI surfaces only what a given employee is entitled to see, the same way your document management system does. The combination of local compute, local model, local retrieval, and governed access is what actually makes proprietary data AI-ready without exposure.
The companies getting this right aren’t sending their crown jewels to the cloud to be processed; they’re bringing the intelligence to the data, not the other way around.
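The retrieval-level permission idea can be sketched with a toy keyword retriever. The class, scoring, and role names below are hypothetical illustrations, not HP’s actual stack or any real RAG framework:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    roles: frozenset  # roles permitted to see this document

class RoleAwareRetriever:
    """Sketch of role-based access control enforced at the retrieval
    step of a local RAG pipeline, so unauthorised text never reaches
    the model's context window."""
    def __init__(self, docs):
        self.docs = docs

    def retrieve(self, query, role, k=3):
        # Filter by permission *before* relevance ranking.
        visible = [d for d in self.docs if role in d.roles]
        ranked = sorted(visible,
                        key=lambda d: self._overlap(query, d.text),
                        reverse=True)
        return [d.text for d in ranked[:k]]

    @staticmethod
    def _overlap(query, text):
        # Crude keyword overlap standing in for vector similarity.
        return len(set(query.lower().split()) & set(text.lower().split()))

docs = [
    Doc("Q3 revenue forecast and margins", frozenset({"finance"})),
    Doc("Employee handbook travel policy", frozenset({"finance", "staff"})),
]
r = RoleAwareRetriever(docs)
# a general staff member can never retrieve the finance-only forecast
ctx = r.retrieve("revenue forecast", role="staff")
```

A production system would use embedding similarity and an identity provider rather than keyword overlap and hard-coded role sets, but the ordering is the point: permissions gate retrieval, and retrieval gates what the model sees.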
Artificial Intelligence News: If we combine autonomous AI with these modern cloud platforms, what happens to the day-to-day role of an enterprise IT team over the next couple of years?
I think Jensen Huang laid this concept out best. He said our job is not to wrangle a spreadsheet or type into a keyboard, that our work is generally more meaningful than that. And he’s drawn a sharp distinction between a job’s task and its purpose. In IT, for example, the task might be provisioning servers or triaging incidents, but the purpose is keeping the business resilient and moving forward. That distinction is exactly what’s playing out right now.
Gartner projects 40% of enterprise applications will have embedded AI agents by end of 2026, up from less than 5% just a year ago, which means the routine execution layer of IT is being absorbed fast but the governance and architecture layer is expanding just as quickly. What’s already happening in leading organisations is a change from IT teams executing tasks to designing and governing the agents that execute on their behalf.
The important gap is that only one in five companies has a mature governance model for that yet. This is where local-first infrastructure matters again. When your automation layer runs on hardware you control, you have full observability over agent behaviour that you simply don’t have when those workloads are abstracted into the cloud. The IT team of the next two years isn’t the team that keeps the lights on. It will be the team that decides which agents get trusted with which decisions and makes sure the infrastructure underneath that judgement is something the business can actually stand behind.
(Image source: Pixabay.)
The post HP and the art of AI and data for the enterprise appeared first on AI News.
The US administration has added four more AI companies to its roster of favoured suppliers, with the Pentagon signing agreements with Microsoft, Reflection AI (which has yet to release a publicly available model), Amazon, and Nvidia that mean their products can be used on classified operations. The companies join OpenAI, xAI, and Google as companies that the Department of Defense can deploy “for any lawful use.”
The phrase “any lawful use” formed the centre of the recent disagreement between Anthropic and the US administration, with CEO Dario Amodei claiming that it would let the US government use Anthropic technology to subject the American civilian population to surveillance and to produce autonomous weapons, uses he wanted walled off. The Pentagon cancelled a $200 million contract with the company, a decision which Anthropic swiftly took to court, claiming millions in lost revenues from the government and others influenced by the government’s decision. The Trump administration termed the company a “supply chain risk”, the first time a US-based company had ever been given such a status. Ensuing statements from government sources described Anthropic as a “woke” company.
The Pentagon’s statement on its new agreements reads, “The Department will continue to build an architecture that prevents AI vendor lock-in and ensures long-term flexibility for the Joint force.” The technologies will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.” The AIs will be used for ‘Impact Levels’ six (secret data) and seven (the most highly-classified materials) use-cases, helping create what the statement describes as an “AI-first fighting force”.
The Pentagon’s current use of generative AI is largely confined to non-classified tasks carried out inside the various defence departments, such as document drafting, summarisation, and research. The new suppliers will help defence forces “streamline data synthesis” too, but also “elevate situational understanding, and augment warfighter decision-making in complex operational environments.” It’s not clear whether those descriptions include domestic deployments inside US borders.
The expansion of the raft of AI suppliers to the US military and security forces makes operations less vulnerable to a change of heart by any individual vendor. With a broader technological base, the personal stances of individual company leaders matter less. Google and Amazon have in the past fired employees for protesting against their companies’ technology being used in weaponry and warfare.
Anthropic’s Claude AI had been used on classified material as part of Palantir’s Maven toolset, a role which the most recent signees may replace. However, the company’s Mythos model is reportedly in use currently by the National Security Agency in the context of the platform’s purported cyber warfare and defence abilities. Worldwide, Anthropic’s Mythos is currently under assessment by 40 organisations, of which only 12 have been named, with the UK’s MI5 and the US NSA thought to be among the remaining 28.
According to Axios, the US administration may be walking back its most recent public stance on Anthropic. The website cited a White House source who said the administration was trying to find ways to “save face and bring ’em back in.” Anthropic’s Claude coding model is allegedly still in use by US government security organisations, and has been throughout recent events.
According to the White House, the US government “continues to proactively engage across government and industry to protect our country and the American people, including by working with frontier AI labs.”
(Image source: “BEST OF THE MARINE CORPS – May 2006 – Defense Visual Information Center” by expertinfantry, licensed under CC BY 2.0.)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post US government increases AI suppliers and rethinks Anthropic’s role appeared first on AI News.
Google is testing Remy, a new AI personal agent for Gemini, according to Business Insider. The tool is designed to take actions for users in work and daily tasks.
Remy is being tested in a staff-only version of the Gemini app. The report said it reviewed an internal document and spoke with two people familiar with the matter. The internal description presents Remy as a “24/7 personal agent”, intended to turn Gemini into an assistant that can act on a user’s behalf.
Two people familiar with the project said Google employees are currently testing Remy. A Google spokesperson declined to comment. The report did not say when, or whether, Google plans to release Remy publicly. It also did not identify which Google services are included in the current employee test.
Task-taking assistant
Remy is part of Google’s broader work to expand Gemini beyond chat-based responses. Google already offers agent-related features, including Agent Mode, though access varies by subscription tier and region.
The report described Remy as more advanced: it is designed to integrate with Google services, monitor what is most relevant to users, handle complex tasks, and learn user preferences.
Gemini’s connected-app surface
Google’s Gemini support documentation shows the current scope of Gemini’s connected services, which Gemini can call on to complete user requests and provide more relevant responses. Connected Apps include Google Workspace services (Gmail, Calendar, Docs, Drive, Keep, and Tasks) and – according to Google’s help documentation – GitHub, Spotify, YouTube Music, Google Photos, WhatsApp, Google Home, and Android utilities.
Control questions
Google’s Gemini Privacy Hub gives context on working with connected apps, including Google apps and third-party services. Users can review and delete Gemini Apps Activity, change auto-delete settings, and manage whether their data is used to improve Google AI. It also lets users manage access to other apps and data, as well as information they have asked Gemini to save.
Google’s existing Gemini documentation covers actions with different levels of user impact, including retrieving information from Workspace apps, creating calendar events, sending messages, opening apps, and controlling device or smart-home functions.
Google Research says AI agents should have well-defined human controllers, carefully limited powers, observable actions, and the ability to plan.
Google Cloud has also said agent activities should be transparent and auditable through logging and clear action characterisation. Its guidance emphasises limiting agent powers according to the intended purpose and user risk tolerance, using the least-privilege principle.
Remy’s reported preference-learning function also puts memory controls in focus. Google’s Privacy Hub says users can manage information they have asked Gemini to save and covers controls for personalisation based on past chats and Personal Intelligence.
The report did not provide technical details on Remy’s architecture, the model version behind it, or the level of autonomy being tested. It also did not say whether Remy can act independently without user confirmation. Those unanswered points mean it’s unclear how Remy handles approvals and logs completed actions.
The internal document describes Remy as a dogfooding project, a term commonly used in technology companies for products employees test before any broader release. The report compared Remy’s concept with OpenClaw, an AI agent that drew attention earlier this year for its ability to autonomously reply to messages, conduct research on behalf of users, and take autonomous actions.
OpenAI CEO Sam Altman said in February that OpenAI was hiring OpenClaw’s creator, according to the report. Google DeepMind CEO Demis Hassabis has previously discussed the goal of building a digital assistant, but Google has not confirmed whether Remy will become a public Gemini feature.
(Photo by Kai Wenzel)
See also: Google made agentic AI governance a product. Enterprises still have to catch up.
The post Google tests Remy AI agent for Gemini as focus turns to user control appeared first on AI News.