Will the EU's AI Act Stifle the Competitiveness of Europe in the AI Race?

The EU is set to implement the world's first comprehensive AI regulation. Image source: Unsplash.

The EU is set to introduce the world's first comprehensive AI regulation, termed the ‘AI Act’. Corporate players, however, argue that this ‘overregulation’ of AI could stifle the continent's competitiveness in the AI race. In this article, we’ll delve into what the AI Act will mean for Europe, the U.S., and other key industry players, and whether the new regulation will weaken Europe’s competitiveness in the AI race.

The AI Act

The European Union has been a global leader in digital regulation, particularly in regulating digital innovations and the internet space as a whole. The EU’s General Data Protection Regulation (GDPR), for instance, has become a benchmark for online privacy, pushing websites worldwide to display cookie consent banners that let users choose what a site can track.

In the same vein, the EU has had its eyes on artificial intelligence: since 2021, it has been drafting the world’s first comprehensive AI regulation, termed the ‘AI Act’. The AI Act, still in development but nearing completion, is set to become the benchmark for AI regulation at a time when AI is increasingly woven into our daily lives.

How Will the AI Act Work?

In essence, the AI Act will break down AI technologies into four categories:

  1. Unacceptable Risk: These are AI systems deemed a threat to people and society; they will be banned and include AI systems that:

    • Conduct real-time and remote biometric identification, such as facial recognition. Think of WorldCoin, for instance.

    • Undertake social scoring, i.e., grade people based on personal characteristics, socio-economic status, or behavior. Think of China’s social credit system.

    • Undertake cognitive manipulation of people or specific vulnerable groups. This could be something like AI-powered voice-activated toys that encourage aggressive behavior in kids.

  2. High Risk: These are AI systems that adversely affect fundamental rights and human safety. They’ll be broken down into two groups:

    • AI systems that are utilized in products that fall under the EU’s product safety regulations. Examples of these products are medical devices, lifts, cars, aviation, and toys.

    • AI systems that fall into 8 specific areas and which will be required to be registered in an EU database. These include:

      1. Law enforcement.

      2. Education and vocational training.

      3. Asylum, migration, and border control management.

      4. Application of the law and assistance in legal interpretation.

      5. Worker management, employment, and access to self-employment.

      6. Management and operation of critical infrastructure.

      7. Access to and enjoyment of essential private services and public services and benefits.

      8. Biometric identification and categorization of natural persons.

  3. Generative AI: Generative AI systems will need to comply with the following transparency requirements:

    • Publishing summaries of copyrighted data used for training.

    • Disclosing that content was generated by AI.

    • Designing these models to prevent them from generating illegal content.

  4. Limited Risk: These are AI systems that will be required to comply with “minimal” transparency requirements that would “allow users to make informed decisions”.1 Users should be made aware that they are interacting with AI and can then decide whether to continue using it. Examples include systems used to create and manipulate audio, video, and images, such as AI systems used to create ‘deepfakes’. [Source: EU Parliament Website]
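To make the tiered structure above concrete, here is a minimal sketch that models the Act's four risk tiers as a simple lookup. The tier names follow the categories described above, but the use-case labels and the `classify()` helper are purely illustrative assumptions of mine, not terms defined in the law itself.

```python
# Illustrative sketch only: the AI Act's four risk tiers as a lookup table.
# The use-case labels below are hypothetical shorthand, not legal terms.

RISK_TIERS = {
    "unacceptable": {"real_time_biometric_id", "social_scoring",
                     "cognitive_manipulation"},
    "high": {"law_enforcement", "education", "border_control",
             "legal_interpretation", "worker_management",
             "critical_infrastructure", "essential_services",
             "biometric_categorisation"},
    "generative": {"text_generation", "image_generation"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use-case label, defaulting to 'limited'."""
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    return "limited"  # catch-all tier with light transparency duties

print(classify("social_scoring"))   # unacceptable
print(classify("deepfake_video"))   # limited
```

In reality, of course, classification under the Act will turn on legal analysis of a system's purpose and context, not a keyword lookup; the sketch is only meant to show how the tiers nest, with "limited risk" acting as the default bucket.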



Concerns from Corporate Entities

The AI Act’s approaching implementation has stirred up concerns among industry players, including big tech companies, which wrote a joint open letter2 with other corporate entities and interests arguing that the law, which they see as overregulation, would impede Europe’s competitiveness in the AI race.

Essentially, these corporate entities were telling the European Commission to back off and water down its upcoming law, to allow them to maneuver and exploit AI for their own interests. Mind you, these are the same companies that have a vested interest in AI and in using our data to train these models, then selling them back to us for various use cases.



The Data Requirement Nightmare

For artificial intelligence systems to get really good at anything, they need to be trained on enormous datasets containing real-world, real-person data. This huge data requirement is a nightmare for user privacy: for AI to truly excel, vast amounts of our data must be fed into training pipelines, and that data is wide-ranging and in-depth. It could include demographic data such as age; medical data such as conditions, weight, height, and family medical history; financial data such as employment status and average pay; and much more.

A health AI system built to diagnose diseases such as cancer early, for instance, could require enormous volumes of personally identifiable and sensitive health data from populations, including in EU countries. The AI Act, which will strictly require disclosure of what data AI systems use, could thus impede the development of such a system if it falls into the unacceptable-risk or high-risk category, triggering cumbersome disclosures and far-reaching limitations.

From experience, however, we know that big tech and big pharma hate regulation that requires them to act responsibly with user data; hence their lobbying to have the AI Act watered down.

Europe’s Competitiveness in the AI Race

Europe, unlike the United States and China, has lagged behind in the development of AI systems. The U.S. and China are the global leaders in the AI field, having produced the most sophisticated AI systems available today. ChatGPT, the world’s most popular AI system, is a product of the United States, while China boasts formidable AI systems from tech behemoths such as Baidu, China’s equivalent of Google.

As such, Europe has trailed these two, but only by about a year. Even before acknowledging the need for AI regulation in 2021, Europe had been taking steps to catch up, including by investing in AI systems and technologies. Since 2019, the EU has invested over €4 billion in AI, including in the Digital Cities program under the GATE Institute, a group of AI projects aimed at improving societal life in Europe.



The Digital Cities AI Project

The Digital Cities program, for instance, aims to digitally map cities such as Sofia, Bulgaria, creating a digital twin of each city. With this virtual twin, researchers aided by AI systems can gain insight into numerous scenarios: air quality at specific points, how wind impacts high-rises, how traffic flows within the city, and so on. All this data matters because researchers can then test changes such as introducing or removing road intersections (and see how that affects traffic), making urban planning adjustments, or installing air conditioning systems at various points in the city (and see how that affects air quality).


Learn more about the project and the overall topic of this article by watching the documentary ‘The Race for Artificial Intelligence: Can Europe Compete?’, courtesy of DW Documentary.


The point is, Europe is quickly catching up to the U.S. and China, and what’s notable is that unlike in those two countries, where private money drives AI development, the EU itself is investing in AI development across the continent. A by-product is that AI systems developed with EU funding will be built natively in line with the AI Act and its tenets. As a result, the EU will help demonstrate how the AI Act functions in practice, along with its benefits, drawbacks, and effects on AI development.

So, will the AI Act impede the EU’s competitiveness in the AI race?

No, I don’t think so. If anything, it will provide a much-needed framework for protecting users and their privacy from exploitation by AI companies, ensuring disclosure of what data is collected, how it will be used, and how users can opt out. Such a framework is crucial at a time when AI adoption is at an all-time high and security, especially with emergent technology, is key; the AI Act will provide just that.



Footnotes:


  1. EU AI Act: first regulation on artificial intelligence. European Parliament Website. 2023. ↩

  2. Open letter to the representatives of the European Commission, the European Council and the European Parliament. Artificial Intelligence: Europe's chance to rejoin the technological avant-garde. Google Drive. 2023. ↩