What is the EU AI Act?

The EU AI Act (AI Act) aims to establish a common regulatory and legal framework for the development and use of AI in the EU.

Companies based outside the EU, including UK companies, could be affected in several ways. For example, if their website offers a chatbot function to users in the EU, they could be caught by the transparency obligations.

What do SaaS Companies need to consider?

Does your product fall within the definition of AI system?

According to Article 3 of the EU AI Act, an 'AI system' means "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

Identify your AI system's risk category

Your obligations depend on the risk category of the AI functionality provided. The four risk categories are:

Unacceptable risk systems:

Prohibited since February 2025, e.g. emotion recognition systems in the workplace and in education, or inappropriate use of social scoring and crime-prediction systems.

High Risk systems:  

AI systems that create a high risk to the health, safety or fundamental rights of individuals and are therefore subject to onerous requirements, e.g.:

  • Risk management system (Article 9) – Providers must establish, document and maintain a risk management system throughout the lifecycle of high-risk AI systems to effectively identify and mitigate risks.
  • Transparency and provision of information (Article 13) – Providers are required to inform users that they are interacting with an AI system and provide clear instructions for its use. They must also supply deployers with detailed instructions on the operation, limitations, and risks associated with the AI system.
  • Post-market monitoring and reporting (Articles 72 and 73) – Providers must establish a monitoring system that collects and analyses data on the performance of these AI systems throughout their lifetime, ensure they continue to comply with regulations, and report serious incidents to market surveillance authorities.

Limited Risk systems:

AI systems that pose a transparency risk and are therefore subject to transparency obligations, e.g. chatbots, where users must be told they are interacting with an AI system.

Minimal Risk systems:

AI systems with no significant restrictions, such as spam filters.

Do you have the required skills and knowledge?

Article 4 of the EU AI Act mandates AI literacy as a fundamental compliance requirement. Organisations deploying AI must ensure that their employees, contractors, and relevant third parties have the necessary skills and knowledge to deploy AI responsibly and manage associated risks.

In practice, this places a greater onus on businesses to implement education programmes that demonstrate comprehension and application, going well beyond one-off training sessions.

Challenges you could face

  • Complexity in Risk Assessment: Determining the exact classification of an AI service and its risk category can be difficult.
  • Dynamic AI Models: Frequent updates to AI models make achieving compliance a moving target.

Enforcement and penalties

Penalties for non-compliance range from €7.5 million to €35 million, or from 1% to 7% of the company's global annual turnover, depending on the severity of the infringement.

Prohibited practices

Up to the higher of €35,000,000 or 7% of the undertaking’s global annual turnover.

High-risk AI and specific transparency requirements

Up to the higher of €15,000,000 or 3% of the undertaking’s global annual turnover.

General-purpose AI (GPAI) models

Up to the higher of €15,000,000 or 3% of the provider’s global annual turnover.

Incorrect, incomplete or misleading information

Up to the higher of €7,500,000 or 1% of the undertaking’s global annual turnover.
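Each tier above follows the same "higher of a fixed cap or a percentage of global annual turnover" rule. As an illustration only, that calculation can be sketched in a few lines of Python (the tier figures come from the Act; the example turnover is hypothetical):

```python
def penalty_cap(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Maximum fine: the higher of the fixed cap or a percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct / 100 * global_turnover_eur)

# Prohibited-practices tier: up to the higher of €35m or 7% of turnover.
# For a hypothetical company with €1bn global annual turnover,
# the 7% figure (€70m) exceeds the €35m fixed cap:
cap = penalty_cap(35_000_000, 7, 1_000_000_000)
```

For smaller companies the fixed cap dominates: at €100m turnover, 7% is only €7m, so the maximum remains €35m.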

Tips you can use to prepare for EU AI Act Compliance

  1. Identify Your AI Systems – Assess the risk category of each system.
  2. Document & Report – Create required documentation and transparency reports.
  3. Mitigate Risks – Implement testing and incident reporting processes; provide regular team training; ensure testing of high-risk AI systems in real-world conditions.
  4. Build Internal Governance – Align legal, product and operational teams. Develop governance processes, documentation and policies.

How can Waterfront help?

  1. Review and draft contracts and policies: Ensuring compliance with the EU AI Act, such as drafting policies covering human oversight, incident management and record keeping. We can also help you put in place an AI policy with clear guidelines on the use of AI within the company and consider updates to existing policies to reflect the use of AI.
  2. Risk assessments: Identifying gaps in existing contractual agreements and helping you identify which category your AI system falls under.
  3. Regulatory updates: Keeping your business up to date with evolving EU regulations.

Reach out to Andrew Gordon if you’d like to discuss any of the above services.