Is Your App Breaking EU Law? A Business Owner’s Guide


The EU AI Act applies to any business whose app uses AI and serves EU users, regardless of where the company is incorporated or headquartered. As of April 2026, three layers of the Act are already being enforced, yet most business owners building AI-enabled apps remain unaware. The regulation classifies AI based on what it does and who it affects, not on its technical sophistication, meaning a simple hiring tool can face stricter rules than a complex recommendation model. Penalties for serious violations can reach up to €35 million or 7% of global annual revenue, exceeding the GDPR’s maximum fines. The real question is not whether the Act applies to your app (it almost certainly does) but which obligations apply, when they take effect, and what non-compliance could cost a business of your size. This guide gives non-technical business owners clear, plain-English answers to those questions, along with a five-point diagnostic to identify specific exposure.

Introduction

The EU AI Act (Regulation (EU) 2024/1689) came into force in August 2024. It is the world’s first comprehensive legal framework for artificial intelligence, and it applies to your business if your app uses AI and serves users in the European Union, regardless of where your company is based or registered. A business headquartered in Bangalore, Boston, or Beirut that sells an AI-powered app to EU users is in scope. No exceptions.

As of April 2026, prohibited AI practices have been banned for over a year. GPAI model obligations for apps integrating foundation models have been live since August 2025. A third layer, the Article 50 transparency rules covering chatbot disclosure and AI-generated content labelling, arrives in August 2026, four months from now. Annex III high-risk AI system obligations are proposed for December 2027.

The most expensive mistake a business owner can make right now is treating all of these as a single future deadline. They are not. This guide explains what applies to your specific app, when it applies, and what the consequences look like at your company’s revenue level.

How the EU AI Act classifies your app

The EU AI Act does not regulate artificial intelligence based on how technically sophisticated the underlying system is. It regulates AI based on what it is used for and who it affects.

A simple keyword-matching tool used to screen job applications is classified as a high-risk AI system and carries strict compliance obligations. A complex large language model used to generate e-commerce product descriptions carries lighter obligations. The sophistication of the technology is irrelevant. The use case and its impact on real people are the only classification questions that matter.

Business owners frequently assume their app’s AI is “basic” and therefore unregulated. That assumption is often wrong — and the cost of getting the classification wrong ranges from €700,000 for a company with €10 million in revenue to €7 million for a company with €100 million in revenue, before reputational damage is factored in.

The four categories every app falls into

Every AI feature in every app falls into one of four categories under the EU AI Act. Your category determines your obligations, your deadlines, and your financial exposure.

Category 1: Prohibited AI

These AI practices are illegal in the EU. There is no compliance path. They must not exist in your product. The full prohibited list is defined in Article 5 of the EU AI Act.


Prohibited practices include AI that uses subliminal techniques to manipulate user behaviour in ways that cause harm; AI that performs social scoring by generating a trustworthiness or reliability score from behaviour across unrelated contexts; AI that conducts predictive policing based solely on profiling; emotion recognition systems in workplace and educational settings; and real-time biometric identification of individuals in public spaces for law enforcement, subject to narrow exceptions.

If your app uses any of these features, this is not a future compliance problem. It is a current legal violation that has been enforceable since February 2025.

Category 2: High-risk AI

High-risk AI systems, the Act’s formal term for AI workflows that influence sensitive decisions, are legal to operate, but only if you meet a substantial compliance framework. The Annex III list of high-risk use cases defines this category by the decision the AI influences and the person it affects, not by industry or technology type.


Annex III covers hiring and recruitment tools, employee monitoring and management systems, credit and lending decision support, educational access and assessment systems, healthcare triage and clinical decision support, essential services eligibility assessment, law enforcement risk tools, and biometric identification systems.

If your app uses AI in any of these contexts, even as a supporting feature rather than the core product, it is almost certainly a high-risk AI system. Compliance obligations include a documented risk management system, data governance controls, technical documentation covering the system’s architecture and purpose, human oversight mechanisms, transparency obligations, and a conformity assessment before the system can be placed on the EU market.

Ailoitte’s enterprise AI security and compliance practice is built specifically to meet these requirements for regulated industries, including healthcare, fintech, and enterprise software, with zero-retention data handling and OWASP-aligned security controls built into every development workflow.

The proposed deadline for full Annex III compliance is December 2027. That extension is not permission to wait. Retrofitting compliance into a production AI workflow that was not designed to accommodate it is significantly more disruptive, and significantly more expensive, than designing governance in from the start. For the full breakdown of what the Parliament delay actually changed and what it did not, see Ailoitte’s analysis, EU Parliament Backed an AI Act Delay — Here Is What It Actually Means for Your App.

Category 3: Limited-risk AI

Limited-risk AI systems carry one primary obligation under Article 50: users must know they are interacting with AI. If your app uses a chatbot or virtual assistant, the user must be informed it is an AI system at the start of the interaction. If your app generates images, audio, or video using AI, that content must be labelled as artificially generated. If your app generates written content that users might mistake for human-authored text, disclosure is required.
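
To make the disclosure obligation concrete, here is a minimal TypeScript sketch of a chat session that surfaces the AI disclosure before any generated content, plus a labelling helper for synthetic media. Every name in it (ChatSession, openChatSession, labelGeneratedAsset) is hypothetical: Article 50 mandates the outcome, not any particular implementation.

```typescript
// Hypothetical types: Article 50 mandates the disclosure, not this design.
interface ChatMessage {
  role: "system" | "assistant" | "user";
  content: string;
}

interface ChatSession {
  messages: ChatMessage[];
  aiDisclosureShown: boolean; // audit trail: disclosure shown at session start
}

// Open a new session with the AI disclosure as the very first message the
// user sees, before any assistant-generated content.
function openChatSession(): ChatSession {
  return {
    messages: [
      {
        role: "assistant",
        content:
          "You are chatting with an AI assistant, not a human. " +
          "Responses are generated automatically.",
      },
    ],
    aiDisclosureShown: true,
  };
}

// Label AI-generated media so downstream surfaces can render a visible
// "AI-generated" badge. The metadata shape is an assumption.
interface GeneratedAsset {
  url: string;
  aiGenerated: true;
  generatedAt: string; // ISO timestamp, useful for audit logs
}

function labelGeneratedAsset(url: string): GeneratedAsset {
  return { url, aiGenerated: true, generatedAt: new Date().toISOString() };
}
```

The design point is that disclosure is a property of the session from its first message, rather than a line buried in terms and conditions.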


These transparency rules activate on 2 August 2026 and were not extended by the Parliament-backed delay to Annex III obligations. Four months is not a long implementation window when legal review, design changes, and engineering work across multiple product surfaces are required. If you are still unclear on exactly what the Parliament delay covered and what it left unchanged, Ailoitte’s detailed breakdown of the delay clarifies the distinction in full.

Category 4: Minimal-risk AI

Spam filters, entertainment recommendation engines, AI-powered gaming features, and inventory management tools are examples of minimal-risk AI. These carry no specific EU AI Act obligations, though GDPR and general consumer protection law continue to apply.


One important caveat: classification is not permanent. If your product changes and your AI begins influencing decisions about individuals in sensitive contexts, the risk tier changes with it. Any significant product pivot that alters how your AI is used should trigger a re-classification review before the change ships.

What is already enforced and what most business owners get wrong

The most dangerous misconception among business owners right now is that the EU AI Act is a future problem. As of April 2026, three layers of the Act are already in force.

Prohibited practices: banned since February 2025. If your app contains any of the features described in Category 1, it has been in violation of EU law for over a year. Enforcement is active.

GPAI model obligations: live since August 2025. If your app integrates a third-party AI model (any large language model, image generation model, or multimodal foundation model), you are operating as a downstream deployer under Chapter V of the EU AI Act. These obligations are in force. Most business owners who use foundation model APIs have not reviewed their deployer responsibilities. Data governance, including how AI models handle your users’ data, is a live obligation, not a future one.

Ailoitte’s guide to AI and data governance covers how to balance innovation with the governance obligations the EU AI Act now requires.

AI literacy obligations: in force since February 2025. Any business that develops or deploys AI systems serving EU users must ensure that the people involved in building and operating those systems have a sufficient working understanding of AI risk and the applicable legal obligations. This is not a 2027 requirement.

What EU AI Act fines look like for a business your size

Before the regulatory fines, consider the commercial consequence that arrives first. Enterprise buyers in financial services, healthcare, and regulated industries are already requiring evidence of AI governance as a condition of procurement. These questions are appearing in RFPs, vendor security reviews, and due diligence checklists today. A company that cannot answer them does not get a chance to discuss price.

On the regulatory side, the EU AI Act penalty structure is tiered by violation severity and expressed as the higher of a flat amount or a percentage of global annual turnover. For SMEs and startups, Article 99 caps each fine at the lower of the two figures, which is why the per-revenue column in the table below reflects the percentage amount.

Violation type                              Maximum fine                     At €10M revenue
Using a prohibited AI practice              €35M or 7% of global revenue     €700,000
Violating Annex III high-risk obligations   €15M or 3% of global revenue     €300,000
Misleading information to regulators        €7.5M or 1% of global revenue    €100,000
These penalty figures exceed the GDPR maximum of €20 million or 4% of global turnover for the most serious violations. EU regulators have made clear that AI compliance is not optional. The penalty structure reflects that intent.
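
To make the cap mechanics concrete, here is a small sketch that computes the maximum exposure per tier for a given global annual turnover. The tier values come from the table above; the functions themselves are illustrative, not legal advice.

```typescript
// Penalty tiers from the table above. The general cap is the higher of the
// flat amount and the turnover percentage; for SMEs and startups, Article 99
// caps the fine at the lower of the two.
interface PenaltyTier {
  violation: string;
  flatCapEur: number;
  turnoverPct: number; // 0.07 means 7% of global annual turnover
}

const TIERS: PenaltyTier[] = [
  { violation: "Prohibited AI practice",          flatCapEur: 35_000_000, turnoverPct: 0.07 },
  { violation: "Annex III high-risk violation",   flatCapEur: 15_000_000, turnoverPct: 0.03 },
  { violation: "Misleading info to regulators",   flatCapEur: 7_500_000,  turnoverPct: 0.01 },
];

// General rule: whichever is higher.
const generalCapEur = (t: PenaltyTier, turnoverEur: number): number =>
  Math.max(t.flatCapEur, t.turnoverPct * turnoverEur);

// SME rule: whichever is lower.
const smeCapEur = (t: PenaltyTier, turnoverEur: number): number =>
  Math.min(t.flatCapEur, t.turnoverPct * turnoverEur);

// At €10M global turnover the SME cap reproduces the table's right-hand
// column: €700,000 / €300,000 / €100,000.
for (const t of TIERS) {
  console.log(t.violation, smeCapEur(t, 10_000_000));
}
```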

Five questions that identify your EU AI Act exposure


These five questions determine your regulatory exposure more accurately than any general guide. Answer them for every AI feature in every product you operate. If any answer is “we do not know,” that uncertainty is itself the compliance gap — and it needs to be resolved before August 2026.

Q1 Does any AI feature in your app influence jobs, credit decisions, insurance outcomes, educational access, healthcare triage, or access to essential services?

If yes: your product almost certainly contains a high-risk AI system under Annex III. The December 2027 compliance framework applies. Architecture decisions, risk management documentation, and human oversight design must start now — not in late 2026.

Q2 Does your app use a chatbot, virtual assistant, or any surface where AI generates content that users interact with?

If yes: Article 50 transparency obligations apply from August 2026. Disclosure language, synthetic content labelling, and opt-out flows must be in place within four months. This deadline has not been extended.

Q3 Does your app integrate a third-party AI model or API from any provider?

If yes: establish whether your product is operating as a provider or a deployer under the Act. These roles carry different obligations. Getting this classification wrong is itself a compliance failure. Read Ailoitte’s guide at ailoitte.com/insights/eu-ai-act-provider-vs-deployer for a practical breakdown.

Q4 Does any AI feature process personal data about EU users in a way that informs automated decisions about those users?

If yes: both GDPR and the EU AI Act may apply simultaneously. The interaction between the two frameworks creates additional obligations that neither regulation fully covers on its own. These two frameworks must be assessed together, not separately.

Q5 Have you documented the risk classification of every AI feature in your product in writing?

If no: you have no defence if a regulator questions your classification decisions, an enterprise buyer requests governance evidence, or an affected user challenges an AI-driven outcome. Written classification documentation is the foundation of every other compliance obligation under the Act.
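
One way to make Q5 actionable is to keep the classification of every AI feature as a structured, versioned record rather than an ad-hoc memo. The shape below is a hypothetical sketch, not a format prescribed by the Act; what matters is that each feature’s tier, reasoning, and review date are written down.

```typescript
// The four risk tiers described earlier in this guide.
type RiskTier = "prohibited" | "high-risk" | "limited-risk" | "minimal-risk";

// Hypothetical record shape: the Act requires documented reasoning,
// not this specific schema.
interface AiFeatureClassification {
  feature: string;          // e.g. "CV screening assistant"
  tier: RiskTier;
  annexIIIUseCase?: string; // which Annex III entry applies, if high-risk
  reasoning: string;        // why this tier, in plain language
  reviewedBy: string;
  reviewedOn: string;       // ISO date; re-review on any product pivot
}

const classificationRegister: AiFeatureClassification[] = [
  {
    feature: "Support chatbot",
    tier: "limited-risk",
    reasoning:
      "Conversational AI with no Annex III use case; Article 50 disclosure applies.",
    reviewedBy: "Legal + Engineering",
    reviewedOn: "2026-04-01",
  },
  {
    feature: "Candidate CV ranking",
    tier: "high-risk",
    annexIIIUseCase: "Employment: screening and filtering of job applications",
    reasoning: "Influences hiring decisions about individual applicants.",
    reviewedBy: "Legal + Engineering",
    reviewedOn: "2026-04-01",
  },
];
```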

Does EU AI Act compliance give your business a competitive advantage?

Yes, and significantly more than most business owners expect.

Enterprise procurement teams in financial services, healthcare, legal, and insurance are now routinely including AI governance sections in their vendor questionnaires. A company that can answer those questions clearly and provide documented evidence of AI risk classification, human oversight design, and compliance planning wins contracts that competitors who are treating this as a 2027 problem cannot access.

The EU AI Act is following the same market trajectory as GDPR. Companies that built data privacy into their products before GDPR enforcement began gained measurable advantages in enterprise sales cycles, security reviews, and regulated market entry. The companies that retrofitted privacy controls after enforcement began spent more, moved slower, and lost deals during the catch-up period.


According to the IAPP’s 2025 AI Governance in Practice report, 68% of enterprise procurement teams at companies with over 500 employees now include AI governance requirements in vendor due diligence, up from 31% in 2023.

Ailoitte’s AI development services are designed with compliance built into the engineering workflow from day one, so every AI product we build is audit-ready without a separate compliance retrofit. The question is not whether your buyers will ask. It is whether you can answer.

Compliance is not the cost. Ignoring it is.

What to do next

If you have read this guide and the honest answer to one or more of the five questions above is “we do not know,” that is the most important finding in this article.

Not knowing your EU AI Act classification is not a neutral position. It is an unquantified legal and commercial exposure that compounds every month you wait. The question is not whether to address it. It is how quickly.

Book a free 30-minute EU AI Act consultation. Get a clear answer on which obligations apply to your specific product.

FAQs

Does the EU AI Act apply to my business if we are not based in the EU?

Yes. The EU AI Act has explicit extraterritorial reach under Article 2. It applies to any provider placing an AI system on the EU market, any deployer using an AI system within the EU, and any business outside the EU whose AI system produces outputs used by EU-based individuals or organisations. This territorial scope mirrors the approach taken by the GDPR. A company based in the United States, India, or the United Kingdom that offers an AI-powered product to EU users is subject to EU AI Act obligations for that product regardless of where its servers are located or where the company is incorporated.

What is the difference between a provider and a deployer under the EU AI Act?

A provider is a company that develops an AI system or has one developed under its direction, then places it on the EU market under its own name or brand. A deployer is a company that uses an AI system developed by someone else in the course of its professional activities. Providers carry significantly heavier obligations — technical documentation, conformity assessments, CE marking, and EU database registration. Deployers carry lighter but still substantial obligations — following the provider’s instructions, maintaining logs, assigning human oversight, and in certain cases conducting a fundamental rights impact assessment. The distinction is not always clean. If your company takes a third-party foundation model, fine-tunes it on your data, and releases it as your own product, you are the provider of that system — not just a deployer of the original model. Getting this classification wrong is a compliance failure with its own regulatory consequences.
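
As a deliberately simplified illustration of that logic, the sketch below encodes the two signals the answer above calls out. The function name and inputs are hypothetical, and it is a sketch of the reasoning, not a substitute for legal review.

```typescript
type AiActRole = "provider" | "deployer";

// Deliberately simplified: the real determination needs legal review.
// Inputs mirror the two signals described in the FAQ answer above.
function classifyRole(opts: {
  placesOnEuMarketUnderOwnBrand: boolean;        // releases the system as its own product
  substantiallyModifiedThirdPartyModel: boolean; // e.g. fine-tuned and rebranded
}): AiActRole {
  if (opts.placesOnEuMarketUnderOwnBrand || opts.substantiallyModifiedThirdPartyModel) {
    return "provider";
  }
  // Using someone else's system in the course of professional activity.
  return "deployer";
}

// A company that fine-tunes a foundation model and ships it under its own brand:
console.log(classifyRole({
  placesOnEuMarketUnderOwnBrand: true,
  substantiallyModifiedThirdPartyModel: true,
})); // "provider"
```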

My app already uses a chatbot. What do I need to do before August 2026?

Under Article 50 of the EU AI Act, which activates on 2 August 2026, any app deploying a chatbot or virtual assistant must clearly inform users at the start of the interaction that they are communicating with an AI system. This disclosure must be explicit — it cannot be buried in terms and conditions. Additionally, any AI-generated images, audio, or video content your app produces must be labelled as artificially generated. This applies to deepfake-style content, AI-synthesised voices, and AI-generated images used in contexts where users might mistake them for authentic human-created content. Four months is not a long implementation window. The disclosure design, legal review of the language, engineering implementation across all relevant surfaces, and QA testing all need to happen before 2 August 2026.

Does the EU AI Act interact with GDPR? Do both apply to my app?

Yes, and they interact in ways that create obligations neither regulation fully covers alone. The EU AI Act and GDPR apply simultaneously to any AI system that processes personal data about EU individuals, which includes the vast majority of AI-powered app features. The EU AI Act’s data governance requirements under Article 10 require that training data, validation data, and test data meet quality criteria and are handled with appropriate governance controls. These requirements overlap with GDPR’s data minimisation, purpose limitation, and accuracy principles, but they are not identical. Meeting GDPR does not automatically satisfy the AI Act’s data governance requirements. Ailoitte’s guide to AI and data governance at ailoitte.com/blog/ai-and-data-governance-balancing-innovation-and-ai-ethics covers how to approach this intersection in practical terms, balancing the innovation objectives of AI development with the governance obligations both frameworks impose. If your AI system processes personal data about EU users and falls into the Annex III high-risk category, a dual-framework assessment covering both GDPR and the EU AI Act is the correct approach.

How do I know if my app’s AI feature is classified as high-risk under the EU AI Act?

A high-risk AI system is defined by its use case, not its technology. Your AI feature is almost certainly high-risk if it is used to screen, rank, or filter job candidates; assess creditworthiness or eligibility for financial products; determine access to educational programmes or evaluate learning outcomes; support healthcare triage or clinical decision-making; assess eligibility for essential public services; or identify individuals using biometric data. These use cases are listed in Annex III of the Act. If your product operates in any of these areas, even as a supporting tool rather than the primary decision-maker, you should treat the system as high-risk and seek a professional classification assessment. If you are unsure, the conservative approach is to treat the feature as high-risk and document your reasoning. A written classification record that concludes a feature is not high-risk is better protection than no record at all.

What happens if my app is found to be non-compliant after enforcement begins?

The enforcement mechanism under the EU AI Act operates through national market surveillance authorities in each EU member state. If a non-compliant AI system is identified, authorities can require corrective action, restrict or suspend market access, and impose administrative fines. For using a prohibited AI practice, fines reach up to €35 million or 7% of global annual turnover. For violating Annex III high-risk AI system obligations, fines reach up to €15 million or 3% of global annual turnover. For providing misleading information to authorities, fines reach up to €7.5 million or 1% of global turnover. In certain EU member states, national AI legislation adds criminal liability on top of administrative fines for the most serious violations. Italy’s Law No. 132/2025, for example, establishes criminal penalties for unlawful dissemination of AI-generated content. Regulatory exposure is not limited to civil fines.

Discover how Ailoitte AI keeps you ahead of risk

Sunil Kumar

Sunil Kumar is CEO of Ailoitte, an AI-native engineering company building intelligent applications for startups and enterprises. He created the AI Velocity Pods model, delivering production-ready AI products 5× faster than traditional teams. Sunil writes about agentic AI, GenAI strategy, and outcome-based engineering. Connect on LinkedIn.


