Last year the EU published a regulation designed to ensure human-centric and trustworthy Artificial Intelligence (AI) while protecting health, safety, and the fundamental rights enshrined in the Act, including democracy, the rule of law, and environmental protection, against the harmful effects of AI systems in the European Union, all while supporting technological innovation. It is a massive piece of legislation with huge implications for agentic commerce.
It is often observed that when a consumer is presented with a software licence agreement, nobody really reads it; they simply click ‘accept’ to move on. What publication has 13 Chapters, 113 Articles, 180 Recitals, and 144 pages of dense legal text? The answer is the EU Artificial Intelligence (AI) Act, published in the Official Journal of the European Union on 12 July 2024. Has anyone actually read this important foundational document, which sets out the regulation of AI in the EU? And what are the implications of this comprehensive Act for agentic commerce?
The EU AI Act claims to be the world’s first comprehensive law regulating Artificial Intelligence. It uses a risk-based approach, grouping AI systems into four categories: unacceptable, high, limited, and minimal risk. The Act also lists AI practices that are prohibited outright. Unacceptable-risk systems (such as social scoring or manipulative AI) are banned; high-risk systems face strict controls, including registration, regular assessment, and human oversight; limited-risk systems (like chatbots) carry transparency requirements; and minimal-risk systems (such as spam filters) are largely unregulated. I’m not sure how the Act will address the viral, politically motivated AI animations of Donald Trump and Vladimir Putin that have circulated online; few may care about those targets, but the same technology could be used against private EU citizens. Non-compliance with the prohibited AI practices is subject to fines of up to 35 million Euros or up to 7% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher. These fines are comparable to the penalties imposed for large-scale privacy and data protection failures under European law and are designed to be both severe and dissuasive for global businesses. We have yet to see whether they will be imposed on any business or individual found responsible for non-compliance with the Act.
Implications for Agentic Commerce
The Act doesn’t specifically mention agentic commerce in any of its 144 pages, but reading between the lines, it has several key implications for agentic commerce, where AI agents autonomously act on behalf of shoppers:
- Risk Classification (Chapter III): Many agentic AI systems are expected to be deemed “high-risk,” especially if they influence financial decisions, access to essential services, or handle sensitive private customer data. This will trigger significant compliance obligations in the Act, such as detailed documentation, risk assessments, transparency, and ongoing human oversight.
- Prohibited AI Practices (Chapter II): Under the Act, an AI agent would be banned if it used manipulative or deceptive techniques to exploit a user's vulnerabilities, such as age, disability, or socio-economic status, to cause "significant harm". A system that targets low-income individuals with predatory financial products, for example, would be illegal. A compliant AI agent must act within the constraints that the consumer has set at deployment.
- Transparency & Auditability (Chapter IV): AI agents must clearly state when users are interacting with AI, explain their decisions, and provide audit trails. Sellers and platforms must ensure users know they are dealing with an AI agent, not a human. This is critical for agentic commerce, and the Act is very prescriptive in this area.
- Consent (Chapter VI, Article 61) & Human Control (Chapter III, Article 14): The Act stresses meaningful, informed consent and the ability to intervene in or override agentic transactions, which is expected to require rethinking consent and authority models where AI agents perform autonomous purchasing of goods and services. Could an AI agent be instructed to book a hotel room based on the loyalty points the customer will earn and the budget the customer has predefined? Does this constitute human control? The Act does not state that human control must be exercised in real time, at the moment the AI agent finalises the purchase transaction.
- Compliance (Chapter III, Article 8): On paper, the law does not address all the potential agentic commerce scenarios, some of which are yet to be defined. This means the Act presents some legal uncertainty, especially regarding AI agents, consumer protection, and where liability sits in fully automated purchases. Early adopters must construct strong internal compliance policies and engage proactively with regulators.
- Data quality and bias mitigation (Chapter III, Article 10): Businesses must ensure that training data for large language models, for example, is relevant, representative, and, to the extent possible, free of errors and bias. The requirement that data sets be, to the best extent possible, complete and free of errors should not prevent the use of privacy-preserving techniques in the development and testing of AI systems. This is especially important for applications like credit scoring, where bias could lead to discrimination. An AI agent could be deployed to find the best household insurance policy yet fail to secure it because regional biases exist in the risk assessment model.
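The consent and human-control points above invite a concrete model: the consumer authorises the agent within explicit constraints, and anything outside them is escalated to the human or rejected. A minimal sketch of how a hotel-booking agent might apply a predefined budget and an approval threshold; the mandate structure, names, and thresholds are all illustrative assumptions, not terms from the Act:

```python
# Hypothetical pre-authorised purchase constraints for an AI shopping
# agent, illustrating the human-oversight idea in Article 14: the
# consumer sets limits up front, and the agent either acts within
# them or escalates the decision back to the human.

from dataclasses import dataclass

@dataclass
class PurchaseMandate:
    budget_eur: float            # hard spending cap set by the consumer
    auto_approve_below: float    # above this amount, a human must confirm
    allowed_categories: set      # e.g. {"hotel", "flight"}

def agent_decision(mandate: PurchaseMandate, category: str, price_eur: float) -> str:
    """Return 'buy', 'ask_human', or 'reject' for a proposed purchase."""
    if category not in mandate.allowed_categories or price_eur > mandate.budget_eur:
        return "reject"          # outside the consumer's mandate entirely
    if price_eur > mandate.auto_approve_below:
        return "ask_human"       # meaningful human control before finalising
    return "buy"                 # within pre-authorised limits

mandate = PurchaseMandate(budget_eur=500, auto_approve_below=200,
                          allowed_categories={"hotel"})
print(agent_decision(mandate, "hotel", 150))   # buy
print(agent_decision(mandate, "hotel", 350))   # ask_human
print(agent_decision(mandate, "flight", 100))  # reject
```

Whether a model like this satisfies "effective human oversight" is exactly the open question the Act leaves: it does not specify whether the human check must occur in real time at the moment of purchase.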
As you dive deeper into the 13 Chapters, 113 Articles, and 180 Recitals, a big question remains about liability: who is responsible when an AI agent causes damage, and how does the Act address the "accountability gap"? The Act, as set out in Chapter III, mandates that effective human oversight must be maintained. This can be challenging for AI agents but is required, particularly for decisions with significant consequences, such as completing the purchase of an airline ticket or a hotel room. Another area the Act does not directly address is digital identity in the context of agentic commerce and payments. However, the interesting Recital 15 describes the notion of “biometric identification” as the automated recognition of physical, physiological, and behavioural human features, such as the face, eye movement, body shape, voice, prosody, gait, posture, heart rate, blood pressure, odour, and keystroke characteristics, for the purpose of establishing an individual’s identity. This is where the Electronic Identification, Authentication and Trust Services (eIDAS) regulation (eIDAS 2.0) ought to play an important role. Under eIDAS 2.0, EU Member States will offer certified wallets to their citizens and businesses. The European Digital Identity (EUDI) Wallet is a separate topic, but its intent is to streamline identity verification for a variety of services, including opening bank accounts and confirming identity for payments. How the EUDI Wallet might support AI agents in agentic commerce is not yet considered in the EU AI Act.
In summary, the EU AI Act does not outlaw agentic commerce, but it aims to force a more rigorous approach to its development and deployment. For any AI agent operating in the EU, the framework mandates compliance, emphasises transparency and human oversight, and imposes strict penalties for harmful or high-risk applications. Retail and travel businesses will need to navigate the evolving regulations for agentic commerce and AI deployments carefully. Edgar, Dunn & Company (EDC) works closely with businesses to adopt a proactive, principles-based, and agile strategy as agentic commerce is expected to evolve rapidly over the next 12 months.
The content of this article does not reflect the official opinion of Edgar, Dunn & Company. The information and views expressed in this publication belong solely to the author(s).
Mark is a Director in the London office and heads up the Retailer & Hospitality Payments Practice for EDC. He has over 25 years of strategy consulting experience in the payments and fintech industries. Mark works with leading global merchants, and with payment suppliers to retailers and hospitality merchants, to develop omnichannel acceptance strategies. He uses EDC’s 360° Payment Diagnostic methodology to identify cost efficiencies and new growth opportunities for retailers and hospitality merchants by defining an appropriate mix of payment methods, acceptance channels, and innovative consumer touchpoints, and by optimising Payment Service Provider and acquiring relationships. Outside the payments and fintech industry, Mark is a passionate snowboarder.