A guide to APIs, MCPs, and MCP Gateways

APIs and MCPs are often mentioned in the same breath as ways for systems to exchange information, but they are designed differently and serve different purposes. This article explains the differences and how software developers and users should approach each.

An API is mainly found in software applications, while an MCP (Model Context Protocol) is used by large language models. APIs let one application talk to another, and an MCP lets an AI model use data and tools in structured ways. The difference arises because LLMs, responding to user requests, need to choose which tools and information they think they need to achieve an outcome.

APIs: Simple definition

An API sends a request in an agreed format to another software instance, and receives a response in the agreed format, with the details of each exchange's protocol (the expected behaviour on each side) hard-coded. Developers write code to call an API and to parse, or handle, the response. This makes APIs precise and reliable – although the interchange can falter if either party changes the code governing the API's behaviour.
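The hard-coded nature of an API exchange can be sketched as follows. This is a minimal illustration with an invented request/response shape (the field names `action`, `id`, and `account_status` are hypothetical), standing in for a real HTTP call:

```python
import json

# A hypothetical, hard-coded API exchange: the request and response
# formats are agreed in advance, and the client parses fields by name.
def build_request(customer_id: str) -> str:
    # The caller must know the exact shape the server expects.
    return json.dumps({"action": "get_customer", "id": customer_id})

def parse_response(raw: str) -> str:
    # The caller must also know the exact shape of the reply;
    # if the server renames a field, this code breaks.
    payload = json.loads(raw)
    return payload["account_status"]

# Simulated server reply, standing in for a real network response.
raw_reply = '{"id": "C-1001", "account_status": "active"}'
print(parse_response(raw_reply))  # -> active
```

The precision – and the fragility – both come from the same place: each side depends on the other honouring the agreed format exactly.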

APIs are still important to systems using LLMs, and many AI-based systems rely on APIs to function. A model may request data, and get responses via an API.

MCPs: Simple definition

MCPs are used when LLMs need access to data in situations like needing to query business data repositories, read the contents of particular files, or trigger an action. MCPs give models a structured way to access multiple data sources via one interface. An MCP server exposes data in a standard format according to rules set up in advance. These rules determine what is available and to whom or what.

MCP servers expose three kinds of ability:

  • Tools are actions the model may instigate, like creating a file or searching a database.
  • Resources are information the model may read as context.
  • Prompts are reusable templates that help users perform common tasks, without having to write a detailed prompt every time they perform the same action.
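The three abilities can be modelled with a toy registry. This is illustrative only – a real MCP server would use an SDK such as the official Python `mcp` package, and the names below (`create_report`, the resource URI, the prompt text) are invented for the example:

```python
# Illustrative only: a toy registry modelling the three MCP abilities.
def create_report(name: str) -> str:
    return f"report '{name}' created"

REGISTRY = {
    # Tools: actions the model may instigate.
    "tools": {"create_report": create_report},
    # Resources: read-only information the model may load as context.
    "resources": {"policy://refunds": "Refunds are processed in 5 days."},
    # Prompts: reusable templates for common tasks.
    "prompts": {"summarise": "Summarise the following document: {text}"},
}

# The model discovers what is available, then invokes a tool by name.
tool = REGISTRY["tools"]["create_report"]
print(tool("Q3-sales"))  # -> report 'Q3-sales' created
```

The key point the sketch captures is discovery: the model sees a catalogue of named capabilities and chooses among them, rather than being hard-wired to a single endpoint.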

The important difference is that MCPs are designed for a model to be the direct consumer of data. The model suggests which tools or resources it requires according to what it thinks may be relevant to the user’s request.

Why MCPs are not API wrappers

In some systems, APIs remain in use but have an MCP placed between them and the user; an MCP server might call an API 'behind the scenes'. However, an API often returns more information by default than a model needs to achieve a task. Because every byte of data must be processed by the LLM, this can burn through many more tokens than necessary. Too much information increases costs and can make the model's answer less accurate.

For example, an API might return 50 database fields about a customer when the LLM requires a single account status entry. Sending all 50 fields gives the model more to process without necessarily providing useful context. The LLM has no idea of the relevance of the data until it has spent processing cycles determining it. It may also base its responses on the extraneous data it has been given, and produce inaccurate answers.

In an ideal scenario, MCP tools are designed around the tasks a model needs to complete. If the user asks how many customers are subscribed to a particular service, or have bought a specific item, for example, the MCP tool will return the relevant numbers, rather than complete customer interaction records.
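The contrast between a raw API response and a task-shaped MCP tool can be sketched like this. The data and names (`CUSTOMERS`, `count_subscribers`) are invented for the example:

```python
# Hypothetical records; a real API row might carry ~50 fields each.
CUSTOMERS = [
    {"id": 1, "service": "broadband", "status": "active"},
    {"id": 2, "service": "broadband", "status": "lapsed"},
    {"id": 3, "service": "mobile", "status": "active"},
]

def api_get_customers() -> list[dict]:
    # An API wrapper hands everything to the model to tokenise.
    return CUSTOMERS

def count_subscribers(service: str) -> int:
    # A task-shaped tool returns only the number the model needs.
    return sum(
        1 for c in CUSTOMERS
        if c["service"] == service and c["status"] == "active"
    )

print(count_subscribers("broadband"))  # -> 1
```

The first function forces the model to read and weigh every record; the second answers the user's actual question in a handful of tokens.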

When each is used

Use an API when one application needs to communicate with another and both parties know exactly what information is required. Websites, mobile apps, internal systems, payment platforms, and reporting tools often use APIs.

If the end consumer of data is an AI model that needs access to information or actions that cannot be fully defined in advance, an MCP should be used. An AI assistant that answers staff questions (where the input varies) or is tasked with reviewing internal documents may use MCPs.

In many organisations, both exist. A customer app that can present specific information (an account balance, for instance) may call APIs. An AI assistant in the same app may use an MCP server because the nature of the queries it will create on behalf of the user will vary. Both may reach the same underlying data, but do so through different interfaces according to the type of system asking.

Security and gateways

A gateway is a device (usually instantiated in software) that fronts both types of service. It handles authentication, rate limits, logging, monitoring, and access control. If MCP use grows, organisations need to know which AI tools are requesting data from which systems, what data they are allowed access to, and what actions they can perform on that data. A gateway can create a place to manage these types of controls.
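A toy version of the checks a gateway centralises – authentication, access control, rate limiting, and an audit log – might look like the following. The keys, limits, and tool names are invented for illustration; a production gateway would be a dedicated network component, not application code:

```python
import time
from collections import defaultdict

# Invented example data: which callers exist and what they may invoke.
API_KEYS = {"agent-123": {"allowed_tools": {"count_subscribers"}}}
RATE_LIMIT = 5  # requests per caller per window
_request_counts = defaultdict(int)
audit_log = []

def gateway_check(api_key: str, tool: str) -> bool:
    caller = API_KEYS.get(api_key)
    allowed = (
        caller is not None                          # authentication
        and tool in caller["allowed_tools"]         # access control
        and _request_counts[api_key] < RATE_LIMIT   # rate limiting
    )
    _request_counts[api_key] += 1
    # Logging: record who asked for what, and the decision taken.
    audit_log.append((time.time(), api_key, tool, allowed))
    return allowed

print(gateway_check("agent-123", "count_subscribers"))  # -> True
print(gateway_check("agent-123", "delete_records"))     # -> False
```

Even this sketch shows why a gateway is valuable for oversight: every request from every AI tool passes one choke point where policy is applied and a record is kept.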

However, because gateways operate at the network layer (arbitrating and recording data movement), they do not solve problems that originate at the software layer (including LLMs, deterministic code, or user activity). In cybersecurity terms, they can be thought of as firewalls: useful in certain contexts, but, like firewalls, they can be circumvented, represent a single point of failure, and may give a false sense of security. MCP and API gateways are arguably perimeter defences that will not reliably prevent data-related incidents, which remain possible when caused by software, whether deterministic 'traditional' code or an LLM.

(Image source: Pixabay under licence.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post A guide to APIs, MCPs, and MCP Gateways appeared first on AI News.

AI agent governance takes focus as regulators flag control gaps

Australia's financial regulator has warned that AI agent governance and assurance practices at financial firms are immature. The warning comes as banks and superannuation trustees expand AI use in internal and customer-facing operations.

The Australian Prudential Regulation Authority said it conducted a targeted review of selected large regulated entities in late 2025 to assess AI adoption and related prudential risks. It found that AI was being used in all entities reviewed, but maturity varied in risk management and operational resilience. APRA said boards showed strong interest in AI for productivity and customer experience, but found that many were still developing their management of AI risks.

The regulator also raised concerns about reliance on vendor presentations and summaries. It said boards were not always giving enough scrutiny to risks like unpredictable model behaviour and the effect of AI failures on critical operations.

APRA said boards should develop a better understanding of AI in order to set strategy and oversight coherently. It said AI strategy should align with an institution's risk appetite and include monitoring and defined procedures to follow in the event of errors.

APRA noted regulated entities were trialling or introducing AI in software engineering, claims triage, and loan application processing. Other use cases cited included fraud and scam disruption and customer interaction.

Some entities were treating AI risk in the same way as other technology risk, but that approach does not account for model behaviour and bias.

APRA identified gaps in model behaviour monitoring, change management, and decommissioning, and said entities need inventories of AI tools and named-person ownership of AI instances. It also pointed to the requirement for human involvement in high-risk decisions.

Cybersecurity was another area of concern. APRA said AI adoption was changing the threat environment by introducing additional attack pathways such as prompt injection and insecure integrations.

Identity and access management practices had not adjusted in some instances to non-human elements such as AI agents. The volume of AI-assisted software development was placing pressure on change and release controls.

APRA said entities should apply controls on agentic and autonomous workflows, including privileged access management, configuration, and patching. It also called for security testing of AI-generated code.

Some institutions had become dependent on a single provider for many of their AI instances, APRA noted, and only a few could show an exit plan or substitution strategy for AI suppliers.

APRA said AI can be present in upstream dependencies, which entities may not be aware of.

Identity and access

The focus on identity and permission controls is also reflected in new standards work by the FIDO Alliance. The group has formed an Agentic Authentication Technical Working Group and is developing specifications for agent-initiated commerce.

FIDO said some existing authentication and authorisation models were designed for human interaction, not delegated actions performed by software. It said service providers need ways to verify who or what authorises actions and under what conditions.

Vendors have presented their solutions to FIDO for review, including Google's Agent Payments Protocol and Mastercard's Verifiable Intent framework. The Center for Internet Security, a non-profit funded largely by the US Department of Homeland Security, has published AI security companion guides that map CIS Controls v8.1 to large language models, AI agents, and Model Context Protocol environments.

Its LLM guide covers prompt and sensitive-data issues, and an MCP guide focuses on secure access by software tools, non-human identities, and network interactions.

(Photo by julien Tromeur)

See also: Google warns malicious web pages are poisoning AI agents



The post AI agent governance takes focus as regulators flag control gaps appeared first on AI News.