The EU AI Act: A Step Forward in AI Regulation, but a Blind Spot in Immigration Enforcement

The European Union (EU) is on the cusp of regulating artificial intelligence (AI) with the EU AI Act, a landmark piece of legislation that restricts uses of AI according to their capacity to cause harm. The act will require makers of AI tools, such as ChatGPT, to disclose any copyrighted materials used in building their systems, making it the West’s first comprehensive set of rules governing the rollout of AI.[0]

However, the proposed law does not ban many harmful and dangerous uses of AI systems in the context of immigration enforcement, a lethal blind spot that could fuel a global race toward ever more intrusive technologies to prevent or deter migration. To prevent such harm, EU lawmakers must prohibit certain AI-based practices outright and protect the rights of migrants, refugees, and asylum seekers against the harmful aspects of AI.

The governance approaches of the EU and the United States (US) touch on a wide range of AI applications with international implications, including more sophisticated AI in consumer products, a proliferation of AI in regulated socioeconomic decisions, an expansion of AI in a wide variety of online platforms, and public-facing web-hosted AI systems, such as generative AI and foundation models.[1]

The EU and the US are jointly pivotal to the future of global AI governance. Ensuring that EU and US approaches to AI risk management are generally aligned will facilitate bilateral trade, improve regulatory oversight, and enable broader transatlantic cooperation. At present, the EU lags behind China and the US in AI and cannot afford to lose further ground.[2] It should align its approach more closely with the UK’s to enable interoperability and limit further damage to AI development and adoption caused by its heavy-handed approach to tech regulation.

The EU has established the European Centre for Algorithmic Transparency (ECAT), its first research center dedicated to analyzing the algorithmic systems of Big Tech platforms.[3] The center will support enforcement of the Digital Services Act, enable experts to examine how these algorithms work, and help identify and address potential risks posed by the platforms that run them.

The EU AI Act categorizes high-risk AI into two tiers: regulated consumer products and AI used for impactful socioeconomic decisions.[1] AI systems categorized as high-risk must adhere to standards for data quality, accuracy, robustness, and non-discrimination. These systems must also incorporate technical documentation, record-keeping, a risk management system, and human oversight. Providers that sell or deploy high-risk AI systems will have to comply with these requirements and demonstrate conformity, or face penalties of up to 6% of their annual global revenue.

In conclusion, the EU AI Act represents a significant step forward in regulating harmful uses of AI. However, its failure to ban harmful uses of AI systems in the context of immigration enforcement, combined with the EU’s heavy-handed approach to tech regulation, could cause the bloc to fall further behind its global rivals. The EU must align its approach more closely with the UK’s to enable interoperability and remain competitive in the global tech landscape.

0. “EU to legislate on ChatGPT and AI tools.” Private Equity News, 28 Apr. 2023.

1. “The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment.” Brookings Institution, 25 Apr. 2023.

2. “The EU Should Learn From How the UK Regulates AI to Stay Competitive.” Center for Data Innovation, 21 Apr. 2023.

3. “EU sets up research hub to analyze Big Tech’s AI algorithms.” Cointelegraph, 19 Apr. 2023.