March 14th, 2024 Legal Updates

Exploring the EU’s AI Act and Potential Prospects for the GCC Region

Earlier this year, the EU unanimously endorsed the passage of the Artificial Intelligence Act (the “Act”). The Act’s initial proposal, put forth by the European Commission in April 2021, laid out comprehensive standards and regulations for the use of AI technologies and for protection against their associated risks.

Notably, the Act provides a general definition of artificial intelligence systems. These include software that utilizes techniques such as machine learning, logic-based systems, and statistical approaches to produce content, predictions, recommendations, and decisions that influence the environments with which they interact.

This article provides a general overview of the Act’s provisions. With the GCC’s increasing utilization of diverse AI technologies, it is crucial to identify the Act’s potential implications for the region:

Risk-Based Approach

The Act adopts a risk-based approach in striking a balance between mitigating the risks posed by AI systems and encouraging innovation in compliance with fundamental rights. The Act categorizes AI systems based on risk levels:

  1. Systems presenting ‘unacceptable’ risks: these face prohibition. Prohibited examples include systems that utilize manipulative techniques, exploit vulnerable groups, or perform real-time biometric identification.
  2. Systems presenting ‘high’ risks: required to meet strict requirements, as they have the potential to endanger safety and fundamental rights. Common examples include systems falling within the scope of EU health and safety regulations, and those utilized in critical sectors such as biometric identification, employment, infrastructure, education, and law enforcement. The Act obliges providers of these AI systems to undergo testing and to satisfy transparency and cybersecurity requirements prior to entry into the EU market.
  3. Systems presenting ‘limited’ or ‘minimal’ risks: those involving human interaction such as chatbots, content manipulation such as deepfakes, and biometric categorization. These systems carry less stringent transparency obligations and are not required to meet the same standards imposed on ‘high’ risk systems, although they must still satisfy certain limited transparency obligations. In addition, the Act encourages developers of limited and minimal risk systems to voluntarily adopt codes of conduct resembling the requirements imposed on ‘high’ risk systems.
Governance, Enforcement, and Sanctions

The Act outlines a governance structure. Member states are to establish and appoint national authorities tasked with ensuring compliance with the Act, and the Act also provides for a central European Artificial Intelligence Board at the EU level tasked with the same activities. These authorities are permitted access to confidential information, subject to confidentiality obligations, in exercising their rights to prohibit, restrict, or recall non-compliant AI systems that pose risks to health, safety, and fundamental rights. The Act also grants these authorities the right to impose fines for non-compliance, scaled to the seriousness of the violation(s) and the resulting risks, including administrative fines of up to €35 million or 7% of the infringing party’s worldwide annual turnover.

Measures to Support Innovation

The Act refers to the European Commission’s strategy of establishing regulatory sandboxes, operated by member states or the European Data Protection Supervisor, to support and encourage innovation. The aim is to provide a controlled environment, within a defined timeframe, for the testing, approval, and development of new AI systems prior to entry into the EU market, with specific measures put forward for start-ups and small-scale businesses. In addition, it is envisaged that the sandboxes should enable participants to utilize personal data for the development of AI systems while complying with the GDPR.

What does this mean for the GCC Region?

Across the GCC region, the utilization of AI systems within critical national sectors such as finance, transportation, healthcare, education, energy, and agriculture is growing at a rapid rate. The emergence of the Act may have significant implications on the use of such AI systems and technology.

Similar to the effects of the GDPR, the Act may prompt policymakers within the GCC region to endorse comparable regulatory frameworks that use the European Commission’s standards as the starting point for regulating the use and deployment of AI and machine learning technologies.

This in turn may prompt AI developers within the GCC region to adopt similar testing and transparency requirements, which are designed to ensure adherence to high standards of personal data protection and the permissible use of machine learning when developing AI-powered software and services.

In addition, the Act may affect the trade of AI services and technology between the GCC and the EU because of the Act’s risk-based approach and its restrictions on the types of AI technology permitted to enter the EU market. While these implications present possible financial and regulatory challenges for GCC member states and AI developers, they also offer opportunities for increased competitiveness and expanded market reach.

Authors: Feras Gadamsi, Partner, and Liana Rashid, Trainee Lawyer.

For further information, please contact Alex Saleh (alex.saleh@glaco.com) or Feras Gadamsi (feras.gadamsi@glaco.com).