Why Anyone Building on AI Should Understand the EU's AI Act
The EU AI Act is a small beacon of thoughtful AI safety in the global sea of confusion.
Technology regulation can massively disrupt the status quo for teams, creating complexity as products must comply with each nation's unique requirements. If you're on a team building a product with customers in multiple countries, staying up to date on regulatory rules, provisions, laws, and oversight is essential. Even those simply implementing AI within their own organizations should understand the EU's goals of ensuring AI use stays safe.
In the world of AI, regulatory compliance is a fast-moving target. Complying with the EU’s AI Act is likely the best choice for product owners.
The Power of Regulation
If you haven't noticed, nearly every website now bombards you with cookie consent pop-ups—full-page modals blocking access until you either surrender your data or attempt to decipher a complex set of cookie settings. This is a direct result of regulation.
In 2018, the EU passed the General Data Protection Regulation (GDPR), introducing strict requirements for handling data collected from EU citizens. These broad and complex rules came with punishing fines, forcing companies worldwide to take compliance seriously. I personally consulted with organizations on GDPR compliance, and I can attest to the costly and disruptive efforts it took to comply, both for product teams and for the company as a whole. I offer this as a perspective: early compliance with, or alignment to, regulation can reduce future risk and cost.
Fast forward to 2024, when the EU introduced the world's most comprehensive AI legislation. As of February 2025, the Act's first obligations, including its ban on prohibited AI practices, apply to companies offering services or selling to EU citizens. Even if you aren't selling to or serving citizens in the EU, you may still want to comply with the legislation.
The Fractured World of Compliance
Leading AI experts have emphasized the need for governance and oversight, warning of numerous risks AI presents to individuals, businesses, and nations. Privacy, bias, security, personal liberty, and even safety concerns—including potential AI retaliation—are at stake. We are already witnessing these risks: AI-driven propaganda, difficulty in distinguishing facts from misinformation, and increased mental health crises among social media users. These issues stem not from human actions alone but from self-learning algorithms operating with little oversight and AI-generated content that is often indistinguishable from human-created material.
Despite these warnings, governments are struggling to keep up. Many lack the expertise and resources to enact meaningful legislation. Even private companies, despite some altruistic efforts, face challenges due to competitive pressures and difficulties in agreeing on oversight mechanisms. For example, Meta's recent decision to end its third-party fact-checking program highlights the challenge of enforcing AI governance.
At the national level, regulation remains inconsistent. In 2023, recognizing Congress’s failure to pass AI legislation, President Biden issued an executive order titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” While the order laid out demands and deadlines for federal agencies to provide oversight, it lacked enforcement mechanisms.
Then, on January 23, 2025, President Trump issued an order revoking the 2023 directive, instead prioritizing global AI dominance. This shift has created uncertainty around compliance requirements in the U.S.
Without federal oversight, individual U.S. states have begun crafting their own AI regulations. If your AI-powered product operates in California or other states with emerging legislation, you may need to comply with varying requirements.
For companies expanding beyond North America or the EU, understanding AI laws in each jurisdiction—including smaller regional regulations—is essential.
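One practical pattern is to centralize each market's requirements in a single lookup, so product logic never hard-codes legal assumptions. Below is a minimal sketch in Python; the jurisdictions, flags, and values are hypothetical illustrations, not statements of what any law actually requires.

```python
# A minimal sketch of per-jurisdiction compliance gating. All names,
# jurisdictions, and requirement flags are hypothetical examples.
from dataclasses import dataclass


@dataclass
class JurisdictionPolicy:
    """Compliance flags a product team might track per market."""
    requires_risk_assessment: bool = False
    requires_ai_disclosure: bool = False   # must users be told AI is involved?
    allows_biometric_id: bool = True


# Illustrative values only; verify each market with counsel.
POLICIES: dict[str, JurisdictionPolicy] = {
    "EU": JurisdictionPolicy(True, True, False),
    "US-CA": JurisdictionPolicy(False, True, True),
    "US-OTHER": JurisdictionPolicy(),
}


def policy_for(market: str) -> JurisdictionPolicy:
    # Default to the strictest known policy when a market is unrecognized.
    return POLICIES.get(market, POLICIES["EU"])


if __name__ == "__main__":
    for market in ("EU", "US-CA", "BR"):
        print(market, policy_for(market))
```

The useful property of this shape is that adding a new market is a data change, not a code change, which keeps legal review focused on one table.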
Why the EU AI Act Stands Out
Among global AI regulations, the EU's AI Act is the most advanced, comprehensive, and assertive. Signed into law in June 2024, it governs the use and deployment of AI technologies within the EU. The Act phases in compliance deadlines: most provisions apply from August 2026, with obligations for certain high-risk systems extending into 2027.
A full breakdown of the legislation deserves its own article, but its core objectives include (a sketch of how a team might track these follows the list):
Transparency in risk areas (especially when AI impacts critical human outcomes).
Bias mitigation and ethical AI training to prevent algorithmic discrimination.
Secure AI development with protections against internal bad actors.
Defense against external threats to AI systems.
Safeguarding intellectual property, privacy, and user safety.
Restricting excessive surveillance while preserving anonymity rights.
Compliance with existing EU laws, including GDPR and copyright regulations.
Special protections for vulnerable groups, including children.
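To make objectives like these actionable, some teams encode them as a review checklist that gates releases. Here is a minimal sketch; the objective names mirror the list above, while the function and example review record are hypothetical.

```python
# A minimal sketch of tracking the Act's core objectives as a release
# checklist. Objective names paraphrase the list above; the review
# record below is a made-up example.
OBJECTIVES = [
    "transparency_in_risk_areas",
    "bias_mitigation",
    "secure_development",
    "external_threat_defense",
    "ip_privacy_and_safety",
    "surveillance_limits",
    "gdpr_and_copyright_compliance",
    "protections_for_vulnerable_groups",
]


def unmet_objectives(review: dict[str, bool]) -> list[str]:
    """Return objectives not yet signed off in a product review."""
    return [o for o in OBJECTIVES if not review.get(o, False)]


# Example: a review where two objectives still need sign-off.
review = {o: True for o in OBJECTIVES}
review["bias_mitigation"] = False
review["surveillance_limits"] = False
print(unmet_objectives(review))  # -> ['bias_mitigation', 'surveillance_limits']
```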
Why You Should Comply
Some software providers argue that the AI Act is too restrictive, potentially stifling innovation over exaggerated fears. However, the legislation aligns with the EU's broader regulatory culture, which prioritizes citizen protection over unrestricted technological development. If competition or innovation begins to lag, the EU may adjust its approach. For instance, in early 2025, the European Commission withdrew its proposed AI Liability Directive, which would have governed AI-related legal claims. While this change does not alter the AI Act itself, it suggests the EU is willing to balance regulation with economic growth.
That said, here are three key reasons why product teams should consider compliance:
It’s the most established AI regulation. The AI Act is widely recognized as a comprehensive framework balancing risk management with innovation.
It’s clear and well-structured. Unlike the vague U.S. executive orders, the AI Act provides transparent guidelines and clear legal expectations.
It’s adaptable. The Act differentiates AI oversight based on risk levels. AI used for consequential decisions (e.g., medical devices or hiring screens) faces stricter rules than AI used for document editing or spam filtering. This tiered approach reduces compliance burdens for lower-risk AI applications; a rough sketch of the tiers follows this list.
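The Act's tiers are commonly summarized as unacceptable, high, limited, and minimal risk. The sketch below shows one way a team might route a use case to an obligation level; the tier names reflect the Act, but the example mapping is illustrative only and not legal advice.

```python
# A rough sketch of routing use cases to the Act's four risk tiers.
# Tier names reflect the Act; the use-case mapping is illustrative
# (the Act's Annex III enumerates the actual high-risk categories).
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., social scoring)"
    HIGH = "strict obligations: risk management, logging, human oversight"
    LIMITED = "transparency duties (e.g., disclose chatbots, label deepfakes)"
    MINIMAL = "no new obligations beyond existing law"


USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "hiring_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def tier_for(use_case: str) -> RiskTier:
    # Unknown use cases default to HIGH pending legal review.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)


print(tier_for("customer_chatbot").value)
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a review before a new feature quietly ships with minimal-risk assumptions.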
How Is Your Team Managing Compliance?
I’d love to hear how your company is navigating global AI regulations. Do you have a dedicated legal team? Is compliance embedded in your product development process, from early user research to final deployment? How are you ensuring that your AI technologies align with evolving legal landscapes?
Let’s discuss!