After exploring the main goals of the EU AI Act, the region it applies to, the rollout schedule, and how other leading economies regulate AI in the first part of this series, we now arrive at the heart of the matter. In this second blog post, we take a closer look at the key elements of the AI Act and what they mean in practice. We’ll break down the most critical duties for banks, financial service providers, and their suppliers, and outline the vital steps they must take to ensure compliance. Let’s dive in!
Key Elements of the AI Act and Implications for Banks
With a strong focus on risk management, transparency, and accountability, the AI Act introduces a risk-based approach to AI governance. For banks and financial institutions, compliance with this regulation will be critical as AI-powered solutions, such as credit scoring and fraud detection, fall under its scope. This section provides an overview of the AI Act’s core elements, with particular emphasis on aspects relevant to the banking industry.
A Risk-Based Approach to AI Regulation
The AI Act categorises AI systems into four risk levels, each with corresponding regulatory obligations. This tiered approach ensures that higher-risk AI applications face stricter requirements while allowing minimal oversight for low-risk systems.
Unacceptable Risk – Prohibited AI Practices
AI systems deemed to pose an unacceptable risk are banned outright under the AI Act. These include social scoring systems that evaluate individuals based on behaviour or socioeconomic status (like China’s social credit system); real-time biometric identification in public spaces (e.g., facial recognition for mass surveillance), with only narrow exceptions for law enforcement purposes; and manipulative or exploitative AI that can harm vulnerable groups, such as children or individuals with disabilities.
For banks, this means they cannot deploy AI systems that rank customers based on arbitrary behavioural data for service eligibility or impose discriminatory lending practices based on opaque profiling.
High Risk – Strict Compliance Requirements
AI applications classified as high risk require rigorous compliance measures, as they significantly impact people’s rights and financial stability. This category includes credit scoring and risk assessment systems, which influence loan approvals and terms, as well as automated insurance claims processing, where AI determines claim validity and payouts.
Banks using high-risk AI systems must:
- Undergo a conformity assessment before deploying such AI models.
- Register the AI system in an EU-wide database for transparency.
- Maintain comprehensive data logging and documentation for auditing purposes.
- Implement human oversight mechanisms to ensure AI-driven decisions can be reviewed and overridden if necessary.
These requirements will likely increase compliance costs for financial institutions but will also enhance the reliability and fairness of AI-based banking operations.
Limited Risk – Transparency Obligations
AI systems that fall under the limited-risk category require clear disclosure and user awareness. An example: chatbots and virtual assistants used in customer service interactions.
For these systems, banks must inform customers that they are interacting with an AI system and clarify the type of data being processed. While these transparency obligations are relatively light, non-compliance could erode customer trust and invite regulatory scrutiny.
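The disclosure duty for limited-risk systems can be enforced mechanically rather than left to individual chatbot scripts. The sketch below wraps any reply function so the first response always carries the AI notice; the `DisclosingChatbot` name and the wording of the notice are assumptions for illustration:

```python
AI_DISCLOSURE = (
    "You are chatting with an automated assistant. "
    "Your messages are processed to answer your banking questions."
)


class DisclosingChatbot:
    """Wraps a reply function so the first response includes an AI disclosure."""

    def __init__(self, reply_fn):
        self._reply_fn = reply_fn
        self._disclosed = False

    def reply(self, message: str) -> str:
        answer = self._reply_fn(message)
        if not self._disclosed:
            # Prepend the notice exactly once, at the start of the conversation.
            self._disclosed = True
            return f"{AI_DISCLOSURE}\n\n{answer}"
        return answer
```

Putting the disclosure in the wrapper rather than in each bot's prompt means the obligation is met regardless of which model or script generates the answers.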
Minimal Risk – Voluntary Guidelines
AI applications classified as minimal risk are subject to no mandatory requirements, but organisations are encouraged to adhere to ethical AI principles. An example: basic automation tools that enhance operational efficiency without making impactful decisions.
Banks leveraging minimal-risk AI tools should follow industry best practices, such as implementing internal ethical guidelines and maintaining a commitment to fairness and transparency.
Regulating General-Purpose AI Models
Recognising the rapid advancement of AI technologies, the AI Act dedicates a separate section to general-purpose AI models, such as large language models (LLMs) that power AI-driven customer support and document processing or image and video generation tools, which could be misused for fraudulent purposes.
While banks may not directly develop these models, their integration into financial services (e.g., AI-assisted loan documentation or automated investment strategies) necessitates due diligence and compliance with emerging standards.
The purpose of this part of the regulation is to ensure that developers and providers of such models maintain comprehensive knowledge of their models throughout the entire AI value chain, so that downstream AI systems can integrate them and fulfil the obligations prescribed by the law.
Sanctions and Enforcement
The EU AI Act establishes a system of penalties designed to be effective, proportionate, and dissuasive. Breaches of the prohibited-practice rules may incur fines of up to 35 million EUR or 7% of global annual turnover, whichever is higher, while other violations can result in fines of up to 15 million EUR or 3% of turnover, again whichever is higher. Emphasising transparency, accountability, and fundamental rights, the Act aims to balance rigorous oversight with the need for innovation. Importantly, the penalty framework also considers the challenges faced by SMEs and start-ups, ensuring that while violations are punished, smaller organisations are not overly burdened.
Getting Ready: Key Steps for Banks to Comply with the EU AI Act
To ensure compliance with the EU AI Act, banks must take several proactive steps:
- Assess existing AI systems and categorise them by risk level, identifying high-risk areas such as credit scoring.
- Develop a comprehensive risk management framework, ensuring robust measures are in place for high-risk systems.
- Collaborate with AI providers to ensure alignment with regulatory standards and maintain compliance.
- Prioritise AI literacy and continuously train staff on the evolving legal landscape and its practical applications.
- Implement a solid governance framework, starting with securing leadership support and assembling a diverse, skilled team across legal, compliance, data science, and cybersecurity.
- Set clear priorities and implement AI governance initiatives gradually.
- Stay engaged with industry developments and contribute to AI governance discussions to build a well-informed, collaborative environment that keeps the bank ahead of regulatory changes.
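The first step, triaging an AI inventory by risk tier, can be sketched as follows. The use-case-to-tier mapping here is a simplified illustration; a real classification must follow the Act's annexes and legal review, and the names are assumptions:

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Illustrative mapping of use cases to tiers; not a legal determination.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "claims_processing": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "internal_automation": RiskTier.MINIMAL,
}


def triage(inventory: dict[str, str]) -> dict[RiskTier, list[str]]:
    """Group a bank's AI inventory ({system name: use case}) by risk tier."""
    grouped: dict[RiskTier, list[str]] = {tier: [] for tier in RiskTier}
    for system, use_case in inventory.items():
        # Conservative default: unknown use cases are treated as high risk
        # until they have been formally assessed.
        tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
        grouped[tier].append(system)
    return grouped
```

The conservative default for unmapped use cases reflects the practical advice above: until a system has been assessed, it is safer to subject it to the stricter regime than to under-classify it.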
How Can AI System Providers Meet the EU AI Act?
AI system providers play a pivotal role in helping organisations comply with the EU AI Act. They are responsible for ensuring that their AI solutions meet the necessary regulatory requirements, especially for high-risk applications. This involves providing clear documentation on the system’s design, functionality, and risk assessment, as well as ensuring that the AI systems are transparent, explainable, and free from bias. Providers must also implement strong data protection measures and ensure that their AI models are continuously monitored for compliance throughout their lifecycle. Additionally, they need to assist their clients in integrating compliance into their workflows, offering support for audits, updating systems to meet evolving regulatory standards, and providing guidance to educate them on AI-related compliance requirements. Effective collaboration with users and maintaining an open dialogue with regulatory authorities are essential for AI providers to navigate the complex compliance landscape.
Conclusion
In conclusion, successfully navigating the EU AI Act and its complex requirements demands a proactive and adaptable approach. For financial institutions, this means not just complying with regulations but embracing AI with a forward-thinking mindset. By carefully preparing, collaborating with the right partners, and staying engaged with ongoing regulatory changes, they can turn AI’s challenges into opportunities. The journey doesn’t end here; continuous learning, adjustment, and a commitment to responsible AI will be key as the landscape evolves. Stay tuned for more insights as we continue to explore this ever-changing field.
And feel free to book an appointment with our expert anytime.