They are starting with a Financial Crimes AI Agent designed to compress AML investigations from hours or days into minutes. The companies say the agent will assemble evidence across a bank’s core systems, evaluate activity against known typologies, surface the highest-risk cases, reduce false positives, improve SAR narratives, and keep the human investigator in control. Early deployments are being developed with BMO and Amalgamated Bank, with broader availability planned for the second half of 2026.
That is the headline.
But I think the more important story is underneath it.
I feel that this is a public blueprint for the agent-first bank: a governed data foundation, a reasoning engine, source-linked conclusions, clear human decision rights, and an architecture designed to scale from one regulated workflow into several others. Anthropic’s adjacent announcements on 5 May, including ten financial-services agent templates, Microsoft 365 add-ins, connectors, managed-agent tooling, and audit logging, make the direction even clearer.
Finance is moving from copilots that help people think to agents that help institutions act.
So, in my eyes, the conclusion for banks is straightforward.
Your future platform is a modern banking platform with an AI operating layer on top of it, and not, as some will try to make you think, “an AI layer instead of architecture”. That means the thin-core argument I make in my book Rip Out the Core gets stronger. The centre that creates balances, postings, obligations and evidence still has to remain deterministic, factual and explainable. The AI sits around that centre, enriching it, interrogating it, protecting it and orchestrating work across it. The part that moves money cannot be allowed to guess.
What was actually announced
FIS’s release is unusually explicit about both the operational target and the delivery model. The Financial Crimes AI Agent is meant to assemble evidence automatically across a bank’s core systems, evaluate activity against known typologies, and prioritise cases for review. FIS says client data will remain inside FIS-controlled infrastructure, every agent conclusion will be traceable, every decision auditable, and human investigators will stay in control. The firm is also very open about the intended expansion path. The same governed platform is meant to extend into credit decisioning, deposit retention, customer onboarding and fraud prevention. In other words, AML is the opening act, not the whole show.
Anthropic’s side of the story reinforces that interpretation. On 5 May it released ten ready-to-run agent templates for financial services, including a KYC screener, a statement auditor, a month-end closer and a model builder. It also described the technical packaging quite plainly. Skills, connectors and subagents, with support for long-running managed agents, per-tool permissions, credential vaults and full audit logs. It added Microsoft 365 integrations and a growing connector ecosystem, including Dun & Bradstreet, FactSet and Moody’s. A day earlier, Anthropic had also announced a new enterprise AI services company with Goldman Sachs, Blackstone and Hellman & Friedman, making it fairly obvious that it wants to be more than a model vendor. It wants to be part of the machinery by which work gets redesigned.
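To make the per-tool permission and audit-logging idea concrete, here is a minimal sketch of what such a gateway might look like. This is an illustration under generic assumptions, not Anthropic's actual API; all names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedToolGateway:
    """Hypothetical gateway: every agent tool call is permission-checked and logged."""
    permissions: dict                      # agent_id -> {tool_name: set of allowed actions}
    audit_log: list = field(default_factory=list)

    def call(self, agent_id: str, tool: str, action: str, payload: dict) -> dict:
        allowed = action in self.permissions.get(agent_id, {}).get(tool, set())
        # Log before acting, so denied attempts are also on the record.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id, "tool": tool, "action": action, "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{agent_id} may not '{action}' on '{tool}'")
        return {"tool": tool, "action": action, "payload": payload}  # stand-in for the real tool call

# The AML agent may read transactions and append case notes, nothing else.
gateway = AuditedToolGateway(permissions={
    "aml-agent-01": {"transactions": {"read"}, "case_notes": {"read", "append"}},
})
gateway.call("aml-agent-01", "transactions", "read", {"account": "ACME-123"})
```

The design choice worth noticing is that the denied attempt still lands in the audit log: traceability has to cover what the agent tried, not just what it was allowed to do.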
Trade and business coverage broadly matched the official line. Finextra’s report followed the announcement closely, emphasising the same core claims around AML case compression, evidence assembly across bank core systems, and early deployments with BMO and Amalgamated Bank. Reuters, covering Anthropic’s broader finance push the next day, reported that financial services had become its second-largest source of enterprise revenue after technology, and that Anthropic had launched ten new AI agents for banks and insurers. That matters because it frames the FIS partnership not as a one-off experiment, but as part of a wider commercial campaign to become a financial workflow platform.
Why this matters beyond AML
There is a temptation to file this under compliance tooling and move on.
That, I feel, would be a mistake.
The significance is not that one vendor has found a particularly expensive back-office process and promised to make it less miserable, although that part I think is useful. The significance is that FIS has effectively published a reference architecture for regulated agent adoption.
It also explains why AML is the right place to start. FIS notes that the UN estimates $2 trillion in illicit funds flows through the global financial system annually, while U.S. institutions spend roughly $35–40 billion a year on AML operations and still force investigators to spend most of their time manually gathering evidence from disconnected systems.
As we all know, the current process is costly, fragmented, slow and politically painful. It is precisely the sort of workflow where intelligent evidence assembly can create obvious value without handing final judgement to a model. That makes it a commercially smart beachhead and, from my perspective, a much easier thing to approve in a steering committee than “let’s have an LLM underwrite the mortgage book”.
But maybe we should keep a little scepticism in the back pocket. These are vendor claims, not yet broad independent production results. Finextra’s reporting largely mirrors the release. The announcement tells us a lot about intent and architecture, and relatively little about failure rates, exception handling, investigator override behaviour, false-negative drift or how the system behaves when the source data is a mess, which in banking it often is because reality enjoys sabotaging slide decks. The Bank of England has already said there is still little evidence that advanced AI is being used in core financial decisions at a level that creates systemic risk today, partly because firms themselves still see interpretability and predictability as binding constraints.
But the broader direction is hard to miss.
Defensively, agents of this kind should speed up evidence gathering, improve consistency, reduce low-value manual work and let experienced investigators focus on judgement. Offensively, or at least adversarially, it is reasonable to expect the opposite side of the market to learn just as fast. The FCA has warned that AI adoption in financial services brings growing risks including sophisticated AI-enabled fraud, identity abuse and opaque decision-making. FATF has highlighted AI-enabled deepfakes and the use of advanced technology in fraud detection and risk scoring, while Europol says generative AI is already accelerating and concealing online fraud schemes. On the cyber side, CrowdStrike and Palo Alto Networks have both described frontier AI as compressing the time between vulnerability discovery and exploitation, turning AI into both burglar and guard dog at the same time.
As such, we can assume that if banks use AI to compress investigation and detection, fraudsters will use AI to compress evasion, impersonation and attack preparation.
And with that the circle keeps repeating…
What banks should take from all this
The most revealing sentence for me in the FIS release may be the least glamorous one.
For most institutions, financial-crime data sits locked in disconnected systems and is impossible to act on at the required speed.
FIS can make this product look tidy because it already sits in the middle of transactions, payments, deposits, credit and customer activity for some financial institutions. That gives it a data position many banks do not have inside their own estates. My read, and this is an inference rather than a vendor slogan, is that the announcement is really a stress test of bank governability. If your evidence is fragmented, your interfaces brittle, your event model thin and your access trail patchy, your agent will not become magical. It will simply discover, very efficiently, that your plumbing is worse than you hoped.
That is why the Rip Out the Core thin-core argument survives this moment intact.
In fact, I think it gets stronger.
The centre of the bank still needs to be a deterministic system of record for balances, postings, contracts and evidence. Around that core, you need AI-aware seams with governed APIs, event streams, identity-rich access, traceable tool use, testable policy enforcement, and proper observability.
The interaction pattern banks should get comfortable with is simple: the agent assembles and reasons, but the bank governs, records and decides.
The practical programme that follows from this is not a big-bang “AI transformation”.
It is progressive modernisation with sharper priorities.
The European Banking Authority says banks are experimenting with multiple deployment approaches, often relying on third-party cloud APIs, open-source and vendor models in combination. That means the winning discipline is not picking one magic box. It is deciding which capabilities stay deterministic, which can be agent-assisted, which can be agent-driven with approvals, and what evidence you need at each boundary.
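That discipline can almost be written down literally. A hedged sketch of such a capability register follows, with illustrative placements that every bank would need to redraw for itself; the capability names and evidence lists are hypothetical.

```python
from enum import Enum

class Tier(Enum):
    DETERMINISTIC = "no model in the loop"
    AGENT_ASSISTED = "agent drafts, a human executes"
    AGENT_APPROVED = "agent executes after explicit human approval"

# Illustrative placements only; each bank must draw its own boundaries.
CAPABILITY_TIERS = {
    "postings_and_balances": Tier.DETERMINISTIC,
    "sar_narrative_drafting": Tier.AGENT_ASSISTED,
    "aml_case_triage": Tier.AGENT_APPROVED,
}

# The evidence each boundary must produce before the capability goes live.
BOUNDARY_EVIDENCE = {
    Tier.DETERMINISTIC: ["reconciliation tests", "immutable postings log"],
    Tier.AGENT_ASSISTED: ["draft-vs-final diffs", "reviewer sign-off trail"],
    Tier.AGENT_APPROVED: ["approval records", "tool-call audit log", "override stats"],
}

def evidence_required(capability: str) -> list:
    """Look up the evidence a capability owes at its control boundary."""
    return BOUNDARY_EVIDENCE[CAPABILITY_TIERS[capability]]
```

The value of keeping a register like this is that "which tier is this capability in, and where is its evidence?" becomes an answerable governance question rather than a debate restarted for every new agent.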
The roadmap below is the one I would start with. It is a recommendation, but it is built directly on the announced FIS design choices and the pressure points regulators are already highlighting.
| Horizon | Modernisation move | Why it matters now | Practical deliverable |
| --- | --- | --- | --- |
| First 90 days | Inventory financial-crime data, interfaces, tool permissions and manual evidence steps | Agents fail where source systems are fragmented or poorly governed | Case-flow map, data lineage map, control inventory |
| Next 6 months | Thin the core and harden the seams | Keep money movement deterministic while exposing governed access for agents | API and event catalogue, entitlements model, audit schema |
| Next 9 months | Build observability and continuous assurance into agent workflows | AI adoption without runtime visibility is just wishful thinking with a budget | Prompt/tool logging, exception dashboards, model monitoring, review sampling |
| Next 12 months | Pilot one high-friction, human-in-the-loop use case | Prove value where evidence assembly is costly and judgement remains human | AML, onboarding review, fraud case triage |
| Next 18 months | Expand by capability, not by fashion | Avoid agent sprawl and duplicated controls | Governed roadmap into fraud, onboarding, retention, selective credit workflows |
| Ongoing | Rework procurement and third-party oversight | DORA, model risk and dependency concentration are not optional side quests | |
The lazy definition of best-of-breed is in trouble.
The serious version is alive and well.
Even the cloud providers are telling you this in plain language.
Amazon Web Services frames the market as build or buy. Use Bedrock if you want model access and flexibility, or buy Claude for Enterprise through the marketplace if you want a packaged, governed service with procurement, billing and security controls already in place. Google Cloud is pushing in the same direction from another angle, with an enterprise agent platform that supports Anthropic models alongside its own and emphasises model choice, orchestration, security and governance. In general, smart banks are combining one to three deployment approaches and relying heavily on third-party services, mainly through cloud APIs.
So in my view, best-of-breed is not dead.
But the unit of value is shifting from standalone application boxes to governable capabilities living inside a composed architecture.
That has consequences for ISVs. The winners are likely to be the providers that already occupy meaningful workflow positions, own or broker high-value data, expose clean APIs, publish events, support traceability and fit into enterprise control frameworks.
FIS is making that bet openly.
Anthropic is making the complementary bet that model intelligence plus services plus connectors plus ecosystem is more defensible than being “just the model”.
Reuters reported that Anthropic’s chief executive warned that some of today’s SaaS incumbents could lose value or go bust if they do not address AI head-on. That sounds dramatic because it is dramatic. But the underlying logic is sound. If your product is mostly a thin UI over routine knowledge work with weak data gravity and weak controls, AI will not destroy you by magic. It will simply make your margin harder to defend.
Why this should make you read my book
If this announcement lands with a thud rather than a gasp, it is because too many banks still treat modernisation as either a heroic demolition project or a problem to postpone until next year’s budget.
FIS and Anthropic are effectively saying the opposite. If you want bank-grade agents, you need governable data, controlled infrastructure, open seams, clear evaluation criteria and somebody accountable for the decisions.
That is also why this moment strengthens, rather than weakens, the case for progressive modernisation. The future bank will indeed have a layered AI model across it. But it will not be AI all the way down. The future bank will have a thinner core, cleaner capability boundaries, better observability, better partner selection, better third-party oversight and a much lower tolerance for mystery processes hiding in the walls. You do not get there by buying another shiny showroom kitchen and hoping the old pipes behave. You get there by knowing what must stay factual at the centre, what can be modular at the edges, and how to modernise one capability at a time without setting fire to the house.
That, in the end, is the real sales story for my book. This is not a book about yesterday’s core strategy. It is a practical guide for building the kind of bank that can survive, govern and exploit this next phase of AI without becoming dependent, opaque or reckless in the process.