
The AI Act Explained: What Companies Need to Do and When

The AI Act is the world's first comprehensive law regulating artificial intelligence. Here's a practical breakdown of who it applies to, what you need to do, and when it all takes effect.

Hanna
Co-Founder & Tech Lawyer
October 15, 2025 · 8 min read

The AI Act aims to ensure that AI systems used in Europe are safe, transparent, and respectful of fundamental rights. The law takes a risk-based approach: the higher the risk an AI system poses, the stricter the rules that apply to it.

The Act officially entered into force on 1 August 2024, but the obligations roll out gradually between 2025 and 2027. This gives companies time to prepare, build internal awareness, and set up the right governance and documentation.

So what does this mean for you in practice? Let's break it down.

1. Who the AI Act applies to

The AI Act applies to providers and deployers of AI systems, as well as importers, distributors, and other intermediaries.

  • Providers are the organizations that build or place an AI system on the market. This includes AI vendors, SaaS companies with embedded AI features, or anyone marketing an AI system under their own name.
  • Deployers are organizations that use AI systems in a professional context. Most businesses fall into this category, even if they didn't build the AI themselves.

Providers face the heaviest pre-market obligations. Deployers are responsible for how AI is used in practice, including oversight, data protection, and fair decision-making.

2. What companies need to do, by role

Providers

If you build or market AI systems, you will need to:

  • Implement risk management and data governance processes.
  • Maintain technical documentation and logging capabilities (a minimal logging sketch follows this list).
  • Test for robustness, accuracy, and cybersecurity.
  • Conduct a conformity assessment and affix a CE mark before placing high-risk systems on the market.
  • Provide clear instructions and transparency information to users.
  • Set up a post-market monitoring system to track performance and incidents.
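
To make the logging obligation concrete: here is a minimal sketch, in Python, of what structured event logging for an AI system could look like. The field names (request_id, model_version, and so on) are our own illustration, not terminology from the Act, and a real system would record far more context.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative event logger for an AI system. Field names are our own
# assumptions, not terms defined by the AI Act.
logger = logging.getLogger("ai_system.events")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())

def log_inference_event(request_id: str, model_version: str,
                        input_summary: str, output_summary: str) -> None:
    """Record one inference as a machine-readable, timestamped log entry."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "request_id": request_id,
        "model_version": model_version,
        "input_summary": input_summary,    # summarize; avoid raw personal data
        "output_summary": output_summary,
    }
    logger.info(json.dumps(record))

log_inference_event("req-001", "v2.3.1", "CV screening features", "score=0.82")
```

The point is less the tooling than the habit: every consequential output should leave a timestamped, reviewable trace.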

Deployers

If your company uses AI in daily operations, such as HR tools or chatbots, you will need to:

  • Maintain human oversight and make sure people can review or override AI decisions.
  • Keep records and logs of how the system is used (see the sketch after this list).
  • Ensure data accuracy and relevance, especially for high-risk applications.
  • Conduct a fundamental rights impact assessment (FRIA) if the use case could significantly affect individuals.
  • Inform people when AI is being used in decision-making processes that concern them.
  • Protect personal data and ensure secure handling of both inputs and outputs.
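
To illustrate the oversight and record-keeping duties together, here is a minimal sketch of a decision record that captures what the AI recommended, who reviewed it, and whether a human overrode the outcome. The structure and field names are our own assumption, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record of one AI-assisted decision. The fields are our
# own illustration, not a schema mandated by the AI Act.
@dataclass
class AIDecisionRecord:
    system_name: str         # which AI tool produced the output
    use_case: str            # e.g. "CV pre-screening"
    ai_recommendation: str   # what the system suggested
    human_reviewer: str      # who exercised oversight
    overridden: bool         # did the reviewer change the outcome?
    final_decision: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = AIDecisionRecord(
    system_name="HR screening tool",
    use_case="CV pre-screening",
    ai_recommendation="reject",
    human_reviewer="jane.doe",
    overridden=True,
    final_decision="invite to interview",
)
print(record)
```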

Even small or mid-sized businesses are expected to have basic awareness and governance in place.

3. When it all takes effect: a simple timeline

2 February 2025
What happens: General provisions and definitions take effect. Unacceptable-risk AI systems (such as social scoring) are banned.
Who is affected: Everyone.
What to do: Review your AI systems to ensure you are not using or providing prohibited applications. Begin building an AI inventory.

2 August 2025
What happens: Governance and transparency rules begin, including for general-purpose AI providers.
Who is affected: Providers and deployers.
What to do: Create or update your AI policy, start logging system usage, and map your providers and data flows.

2 August 2026
What happens: Full obligations for high-risk AI systems take effect.
Who is affected: Providers and deployers of high-risk systems.
What to do: Providers must have CE-marked systems and full documentation. Deployers must implement oversight, data checks, logging, and impact assessments.

2 August 2027
What happens: Final rules apply to AI systems that are safety components or embedded in regulated products (such as medical devices).
Who is affected: Providers and deployers in regulated sectors.
What to do: Integrate AI Act compliance with existing product safety processes.

4. Why this matters now

Even though most obligations start in 2026, companies should not wait. Building an AI inventory, setting internal policies, and identifying high-risk uses take time. Early preparation helps reduce costs, avoid rushed compliance, and prevent data protection issues.
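
An AI inventory does not need to be elaborate to be useful. As a minimal sketch, each entry might record the system, your role (provider or deployer), and the risk tier it most plausibly falls under. The fields and example tools below are illustrative assumptions, not an official schema.

```python
# Minimal AI inventory: one entry per AI system in use, tagged with the
# AI Act risk tier it most plausibly falls under. Fields and example
# tools are our own illustration, not an official schema.
inventory = [
    {
        "system": "HR screening tool",
        "role": "deployer",           # provider or deployer
        "risk_category": "high",      # prohibited / high / limited / minimal
        "owner": "HR department",
        "personal_data": True,
    },
    {
        "system": "Customer support chatbot",
        "role": "provider",
        "risk_category": "limited",   # transparency obligations apply
        "owner": "Support team",
        "personal_data": True,
    },
]

# Surface the systems that face the heaviest obligations first.
high_risk = [s["system"] for s in inventory if s["risk_category"] == "high"]
print("High-risk systems to prioritise:", high_risk)
```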

If your organization uses public or unapproved AI tools, now is also the time to get visibility. Shadow AI (AI use outside official company approval or oversight) creates blind spots that make it impossible to prove compliance when the AI Act's obligations take effect.

5. How Trustflo helps

Our platform is designed to make compliance practical, not painful. It automatically detects shadow AI in your environment, maps your tools to AI Act categories, and creates a live inventory. We also generate the documentation required by the AI Act, such as risk records and usage logs, and provide templates for impact assessments, so you can create a clear paper trail and stay ready for audits, all without hiring a legal team or slowing innovation.

The AI Act is about trust, transparency, and accountability. Whether you build AI or simply use it, visibility is the first step toward compliance. Start mapping your systems now, and you will be ready when the rules become real.

Ready to get compliant?

See how Trustflo helps you discover shadow AI and prepare for the AI Act.