Somewhere in the near future, the leading edge of which is already visible today, you will tell an AI to book you a flight, and the AI will book the flight. Not research flights and show you options. Not open a browser tab and wait for you to click confirm. It will find the flight, evaluate the options against your preferences, choose one, enter your payment credentials, and complete the purchase. You will receive a confirmation. The transaction will have happened without your hand touching a keyboard or a screen at any point in the process.
This is what the payments industry is beginning to call agentic commerce. The agent is the AI. The commerce is the purchase. The question that the industry is now trying to work out, with some urgency, is who is responsible for what happens when the agent gets it wrong.
The technical capability has been developing steadily. The large language models that power AI assistants have, over the last two years, acquired the ability to browse the web, execute actions on it, call APIs, and complete multi-step tasks without continuous human direction. The step from browsing to purchasing requires only that the AI be given access to payment credentials, which is a technical problem that has been solved in various ways already. Several major AI platforms now offer versions of this capability for limited use cases. The expansion is underway.
Mastercard published a piece in late 2025 noting that agentic commerce would expand significantly in 2026, and that, critically, so would the guardrails around it. The specific challenge they identified is authentication. In the current card payment system, authentication works on the assumption that a human is on the other end of a transaction. The card is presented. A PIN is entered. A biometric is verified. A 3D Secure challenge pops up for the cardholder to complete. These mechanisms are designed for human interaction. An AI agent that is completing a purchase on your behalf cannot enter your PIN in the way you would. It cannot respond to a biometric prompt with your face. The authentication architecture assumes a person, and the person is, increasingly, not there.
The industry is working on new models for agent authentication. Some proposals involve issuing a specific credential to the AI agent that is tied to the user’s account but separate from their personal payment credentials. The agent gets its own token, scoped to certain permissions, with spending limits or merchant category restrictions built in. When the agent uses this token, the issuer knows it is an agent transaction, can apply different risk models, and can hold the agent to certain accountability standards. Think of it like a corporate card issued to an employee with pre-set rules. The employee, in this case, is software.
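The scoping logic described above can be sketched in a few lines. This is a hypothetical illustration, not any real issuer's API: all the names (`AgentToken`, `authorise`, the category labels) are invented here to show the shape of the idea, a credential that carries its own spending limit and merchant category restrictions and is evaluated against them on every transaction.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an agent-scoped payment credential. The names
# and fields are illustrative assumptions, not drawn from any real
# payments standard or issuer API.

@dataclass
class AgentToken:
    user_account: str
    spend_limit_per_txn: float                 # maximum single-transaction amount
    allowed_merchant_categories: set = field(default_factory=set)

    def authorise(self, amount: float, merchant_category: str) -> bool:
        """Approve only transactions that fit the token's built-in scope."""
        if amount > self.spend_limit_per_txn:
            return False
        if merchant_category not in self.allowed_merchant_categories:
            return False
        return True

# A token scoped to travel purchases, capped at 500 per transaction.
token = AgentToken("acct-123", 500.0, {"airlines", "hotels"})
print(token.authorise(320.0, "airlines"))      # within scope
print(token.authorise(320.0, "electronics"))   # blocked: wrong merchant category
print(token.authorise(750.0, "airlines"))      # blocked: over the spending limit
```

Because the issuer knows this token belongs to an agent rather than a person, it can apply different risk models to the same transaction, which is the point of separating the agent credential from the user's own.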
The fraud question is adjacent and significant. Payment fraud has always been an arms race between those trying to steal and those trying to prevent theft. AI is on both sides of that race. The same capabilities that allow an AI to navigate a checkout flow on your behalf allow a malicious AI to impersonate a legitimate user and execute unauthorised transactions. Deepfake attacks, in which AI-generated video or audio is used to pass biometric checks, were occurring, according to one report, every five minutes globally in late 2025. The fraud detection systems that banks and payment networks use are also increasingly AI-driven, applying machine learning to transaction patterns in real time to identify anomalies. The systems that defend and the systems that attack are both getting smarter, and neither is likely to stop.
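The anomaly detection mentioned above can be caricatured with a toy statistical check. Real fraud systems use far richer features and learned models; this sketch only shows the basic shape, scoring new activity against the account's own history and flagging outliers for review. The function name and threshold are assumptions for illustration.

```python
import statistics

def is_anomalous(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount sits far outside the account's history.

    A toy stand-in for real-time fraud scoring: compute how many standard
    deviations the new amount is from the historical mean, and flag it if
    the distance exceeds the threshold.
    """
    if len(history) < 2:
        return False  # not enough data to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0]
print(is_anomalous(history, 50.0))     # typical spend, not flagged
print(is_anomalous(history, 2400.0))   # extreme outlier, flagged
```

The arms race the paragraph describes lives in everything this sketch leaves out: which features are scored, how the model adapts as attackers learn its boundaries, and how quickly it does so.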
There is a subtler question embedded in agentic commerce that the industry has not fully engaged with yet, which is the question of what happens to consumer choice when the choice is made by an algorithm. When you choose a product or a service, you bring preferences, values, and sometimes idiosyncratic reasons to the decision. You might choose a smaller vendor because you prefer to support independent businesses. You might pick the slower shipping option because you know the cheaper one has worse labour practices. You might decline a subscription because you read a news article this morning that changed how you feel about that company. An AI agent optimising for price and speed and compatibility with your stated preferences will not carry any of these considerations unless you have explicitly programmed them in. And the explicit programming of values into a purchasing AI is not something most people will do with care.
The more an agent handles purchasing on behalf of users, the more the commercial outcome becomes a function of the agent’s incentives rather than the user’s values. This is not paranoia. It is a straightforward consequence of delegation. When you delegate a decision, you hand over the value judgments embedded in it along with the task. The agent’s training, the commercial relationships of the platform running the agent, the data it was optimised on, all of these shape what gets purchased and from whom. These are not inconsequential decisions. Collectively, across millions of transactions, they shape which businesses succeed and which do not.
For the payments industry, the practical priority right now is the plumbing. How do agents authenticate? How does liability work when an agent makes a purchase the user did not intend? If the agent books the wrong flight because it misunderstood the prompt, is the liability with the user who gave the instruction, the platform that built the agent, or the merchant who completed the sale? These are not hypothetical questions. They are the questions that will define how agentic commerce is structured, because without clear liability rules, neither merchants nor financial institutions will be comfortable enabling it at scale.
What is happening is genuinely new in payments terms. The transaction, the physical or digital act of paying, has always involved a human making a final choice, even if supported by technology. The removal of that final human step is not just a user experience change. It is a structural shift in what the payments system is designed to do and who it is designed to serve. Getting it right matters for a straightforward reason: a payment that goes wrong because an AI decided wrongly is no less real than a payment that goes wrong for any other reason. The money moves. The consequences follow. The accountability for those consequences needs to be worked out before the volume gets too large to address retroactively.
The industry is building the future fast. It is worth building the rules at a comparable speed.