EU AI Act: A Milestone in Artificial Intelligence Regulation
On August 1, 2024, the European Union’s Artificial Intelligence Act (EU AI Act) officially came into force, signalling the arrival of the world’s first comprehensive AI regulation. While most of its provisions will become mandatory only in August 2026, certain rules – in particular the bans on prohibited AI practices – took effect in February 2025. This makes now the perfect time to examine the purpose and key elements of the AI Act, and what it means for the financial sector and for technology providers offering AI-powered business solutions.
Due to the breadth and complexity of the EU AI Act, we’ve split this topic into two parts. In this first article, we’ll explore the Act’s core objectives, its territorial scope, and its implementation timeline, together with key milestones. We’ll also examine the broader regulatory landscape, comparing the EU’s approach to AI governance with those of other major economies. In our next blog post, we’ll dive into the key elements of the AI Act and outline the most critical requirements for banks, financial service providers, and their suppliers, as well as the essential steps they need to take to ensure compliance.
The Core Objectives of the AI Act
At its heart, the AI Act aims to create a harmonised legal framework across the EU that both nurtures innovation and safeguards our fundamental values. In an era where rapid AI development presents unprecedented challenges – not only for sectors like finance but for all industries – the regulation provides clear, risk-based criteria to address concerns such as algorithmic bias, opacity, and data misuse. As the first law of its kind, it sets a global benchmark by balancing stringent protections for health, safety, and individual rights with measures designed to stimulate technological progress and economic growth. Its horizontal approach ensures that these rules apply uniformly across all sectors and Member States, offering legal certainty and paving the way for a trusted, human-centric AI ecosystem that underpins a competitive, integrated internal market.
International AI Regulatory Landscape
The EU AI Act is the most comprehensive legal framework for artificial intelligence to date, but it is not the only regulatory effort worldwide. Other major economies have also recognised the need for AI governance, though their approaches vary significantly.
In the United States, there is currently no overarching federal AI regulation; however, agencies such as the Federal Trade Commission (FTC) have issued guidelines on ethical AI use. Donald Trump’s return to office marked a significant shift in U.S. AI regulation. After taking office, President Trump revoked the executive order signed by Joe Biden in 2023, which aimed to ensure the safe and ethical governance of AI development. His replacement executive order emphasises promoting innovation and strengthening U.S. technological leadership in AI. As part of this approach, he announced the “Stargate” infrastructure project, a $500 billion initiative designed to expand the country’s AI infrastructure and solidify its leadership in the global tech race.
China, in contrast, has implemented a state-controlled regulatory framework, embedding AI governance into broader national strategies like the New Generation Artificial Intelligence Development Plan. The goal of this plan is for China to take a leading role in the development and application of artificial intelligence by 2030. The country focuses on balancing rapid AI development with strict government oversight.
Meanwhile, Japan has adopted an ethics-based approach under its Society 5.0 initiative, promoting AI as a driver of human-centric innovation while aligning with global best practices. Japan is also working on comprehensive AI legislation, which is expected to come into effect in 2025.
Canada was among the early adopters of AI regulation: in 2022 it introduced the Artificial Intelligence and Data Act (AIDA), which aims to ensure the safe and responsible use of AI systems. The AIDA focuses on regulating high-risk AI systems, minimising individual and collective harms, and enhancing transparency and accountability. Canada also continues to refine its Pan-Canadian AI Strategy, with a focus on responsible AI governance and international harmonisation.
Despite the variety of national strategies, international coordination on AI regulation remains a significant challenge. However, global organisations are working to create alignment. The OECD’s AI Principles and UNESCO’s Recommendation on the Ethics of Artificial Intelligence are being adopted by various countries to establish common ground. Additionally, the G7 nations are actively engaged in discussions to develop shared regulatory approaches, while ISO and other international standards bodies continue to work on comprehensive AI frameworks.
These efforts highlight the increasing recognition of the need for global collaboration in AI governance. Common principles such as transparency, accountability, and the mitigation of bias are emerging as universal priorities. However, significant political and economic differences continue to impede the development of a unified global regulatory framework. For financial institutions and technology providers operating across multiple markets, this fragmented landscape creates a complex web of compliance requirements, forcing them to stay vigilant as global standards evolve.
To achieve real alignment in AI regulation, it is essential to harmonise key areas such as data privacy, algorithmic transparency, and ethical AI use. Establishing international standards for AI safety, fairness, and accountability can help mitigate risks and ensure that AI technologies benefit society as a whole. Closer cooperation between countries and international bodies is crucial to address these challenges and create a cohesive global regulatory environment.
Territorial Scope of the AI Act
The EU AI Act has a broad territorial scope, similar to that of the GDPR. It applies not only to companies operating within the EU but also to those outside the EU if they offer AI systems to EU users or if their AI-generated outputs are used within the EU. This extraterritorial reach means that businesses worldwide must ensure compliance if they develop, market, or deploy AI solutions in the EU, reinforcing the Act’s role in shaping global AI governance.

Implementation Timeline and Key Milestones
The Act’s obligations are being phased in: the regulation entered into force in August 2024, the bans on prohibited AI practices have applied since February 2025, obligations for general-purpose AI models apply from August 2025, and most remaining provisions become mandatory in August 2026. With these deadlines approaching, banks and other financial institutions should start preparing now to ensure compliance and maintain a competitive edge in the evolving AI landscape. You can find the detailed timeline here.
Conclusion
AI regulation continues to evolve, not only to keep up with rapid technological advancements but also to refine key details and ensure practical implementation. For the banking sector, these regulatory changes bring both challenges and opportunities. Compliance is not just a legal requirement – it enhances trust, strengthens market position, and ensures responsible AI adoption.
Stay with us as we continue to analyse the EU AI Act! In our next post, we’ll dive into its key components and what they mean for banks, financial service providers, and their partners – along with the vital steps they must take to stay compliant. Don’t miss it!
And feel free to book an appointment with our expert any time.