A Primer on the Artificial Intelligence Act of the European Union
Published Nov 20, 2024
The European Union's (EU's) Artificial Intelligence (AI) Act is a landmark piece of legislation that lays out a detailed and wide-ranging framework for the comprehensive regulation of AI in the European Union, covering the development, testing, deployment, and use of AI.[1] This report is one of several intended to serve as succinct snapshots of a variety of interconnected subjects that are central to AI governance discussions in the United States, in the European Union, and globally. This report, which focuses on key aspects of the EU AI Act, is not intended to provide a comprehensive analysis but rather to spark dialogue among stakeholders on specific facets of AI governance, especially as AI applications proliferate worldwide and complex governance debates persist. Although we refrain from offering definitive recommendations, we explore issues that are relevant to AI developers and legislators.
The U.S. Congress has been working on numerous bills to manage different risks and opportunities stemming from a rapid acceleration of AI development. As of June 1, 2024, there were more than 90 pieces of legislation introduced in the 118th U.S. Congress designed to impose restrictions on high-risk AI, propose evaluation or transparency requirements, create or designate an AI regulatory oversight authority, protect consumers through liability measures, or mandate AI studies.[2] The long list of proposed legislation indicates that, so far, Congress has generally been pursuing a decentralized approach to AI regulation.
Meanwhile, the European Union has chosen a different path. In August 2024, the EU AI Act—a comprehensive, risk-based AI regulation—entered into force.[3] This report on the EU approach to AI serves as a primer to familiarize U.S. legislators and their staffs with this alternative model of regulation. The purpose of this report is not to argue for or against the EU approach but to offer additional perspective for the U.S. AI legislation debate and to help legislators and their staffs understand the requirements that U.S. companies operating in the EU market will have to follow.[4]
When the European Union initiated the AI regulatory process, the European Commission defined its objectives as ensuring that AI systems are safe and respect fundamental rights and EU values, providing legal certainty to encourage investment and innovation, enhancing governance and enforcement of existing laws, and preventing market fragmentation.[5] To accomplish these goals, the European Union decided to adopt a regulatory approach that would "introduce a proportionate and effective set of binding rules for AI systems" that are "clear and robust" and applied across all sectors, tailoring "the type and content of such rules to the intensity and scope of the risks that AI systems can generate."[6] In other words, the Act imposes obligations and requirements based on the risk posed by the AI system, where risk is defined as "the combination of the probability of an occurrence of harm and the severity of that harm."[7] The higher the risk, the stricter the rules.
The EU AI Act framework categorizes risks created by AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk. Except for high risk, these terms are not explicitly used in the EU AI Act’s text, but they are widely used in EU communications about the Act.[8] The Act itself prohibits certain AI practices with unacceptable risk, sets requirements for high-risk AI systems and obligations for their operators, and defines transparency obligations for certain AI systems that pose limited risk. AI systems that do not fall under any of the categories (i.e., posing minimal risk) can be placed on the EU market without restrictions or obligations and as such are not covered by the EU AI Act (or discussed in this report). Additionally, the EU AI Act regulates general-purpose AI models, which can be integrated into AI systems of different risk categories.[9] It defines an AI system as a
machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.[10]
The prohibited practices deemed to pose unacceptable risks to fundamental rights and safety focus on the use of AI systems that manipulate human behavior through subliminal, purposefully manipulative, or deceptive techniques; exploit vulnerabilities related to age, disability, or social or economic situation; perform social scoring that leads to detrimental or unfavorable treatment; assess or predict the risk of a person committing a criminal offense based solely on profiling or personality traits; create or expand facial recognition databases through untargeted scraping of facial images from the internet or closed-circuit television footage; infer emotions in workplaces and educational institutions (except for medical or safety reasons); or categorize individuals on the basis of biometric data to infer sensitive attributes such as race, political opinions, or sexual orientation.
Additionally, the EU AI Act bans the use of real-time remote biometric identification systems in public spaces for law enforcement purposes, except under specific, narrowly defined circumstances (such as serious threats to public security) and only with appropriate safeguards and judicial oversight.[11] These prohibitions will go into effect on February 2, 2025.[12]
The EU AI Act has two categories of high-risk AI systems. The first category covers those AI systems that are products or safety components of products required by separate EU product safety legislation to undergo a third-party conformity assessment.[13] This list contains regulations and directives regarding different types of vehicles (ground, maritime, and aerial), as well as machinery, lifts, radio equipment, protective equipment, pressure equipment, cableway installations, appliances burning gaseous fuels, medical devices, and toys.[14] The requirements for these high-risk systems will go into effect on August 2, 2027.[15]
The second category of high-risk AI systems poses significant risks to individuals' health, safety, and fundamental rights, especially when the systems "materially influenc[e] the outcome of decision making" in several areas listed in Annex III of the EU AI Act. This category encompasses specific AI systems used in biometric identification and categorization, critical infrastructure management, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and the administration of justice and democratic processes.[16] Even in these areas, AI systems are not deemed high risk if they perform a preparatory or narrow procedural task or are "intended to improve the result of a previously completed human activity."[17]
Otherwise, AI systems deemed to pose a high risk will have to be registered in an EU database. Before these systems are placed on the market or put into service, providers will have to implement quality management systems that, among other things, would ensure that high-risk AI systems comply throughout their life cycle with an extensive set of requirements regarding risk management; data and data governance; technical documentation; recordkeeping; transparency and provision of information to deployers; human oversight; and accuracy, robustness, and cybersecurity.[18]
The quality management system has to provide a regulatory compliance strategy; description of techniques, procedures, and systematic actions for design, design control, design verification, development, quality control, and quality assurance; procedures for examination, testing, and verification of the system; systems and procedures for data management; risk management systems; postmarket monitoring systems; procedures for serious incident reporting and for recordkeeping; an accountability framework; and handling of communication with oversight bodies.[19] All these requirements for high-risk AI systems of the second category will go into effect—along with most articles of the EU AI Act—on August 2, 2026. The list of high-risk systems can be amended by the European Commission.[20]
The last group of AI systems covered by the Act comprises those that pose transparency risks (i.e., limited-risk systems). The European Union requires that users be informed when they interact directly with an AI system (e.g., a chatbot). Similarly, the EU AI Act requires that AI-generated synthetic audio, image, video, or text content be marked as such in a verifiable machine-readable format.[21] In the case of deep fakes or content intended to inform the public on matters of public interest, deployers need to effectively disclose the artificial generation or manipulation of outputs.[22] Finally, deployers have a responsibility to inform people about their exposure to an emotion recognition system or a biometric categorization system.[23] Some systems can fall under more than one set of obligations.[24] For example, some biometric categorization AI systems can simultaneously fall under high-risk system requirements and transparency requirements, while other biometric categorization AI systems might be prohibited.
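The Act does not prescribe a specific technical format for this machine-readable marking; standards and codes of practice are still being developed. As a purely illustrative sketch, the following Python snippet shows one hypothetical way a provider could attach a verifiable provenance record to generated content (the field names and record structure are assumptions for illustration, not requirements of the Act):

```python
# Illustrative only: the EU AI Act does not prescribe a marking format.
# This sketch builds a simple, verifiable, machine-readable provenance record
# (field names are hypothetical) using only the Python standard library.
import hashlib
import json
from datetime import datetime, timezone


def provenance_record(content: bytes, generator: str) -> str:
    """Return a JSON record declaring the given content as AI-generated."""
    return json.dumps(
        {
            "ai_generated": True,                                   # explicit machine-readable flag
            "generator": generator,                                 # identifies the generating system
            "content_sha256": hashlib.sha256(content).hexdigest(),  # binds the record to the content
            "created_utc": datetime.now(timezone.utc).isoformat(),
        },
        indent=2,
    )


if __name__ == "__main__":
    synthetic_text = b"Example output produced by a generative model."
    print(provenance_record(synthetic_text, generator="example-model-v1"))
```

In practice, providers might instead rely on emerging provenance and watermarking standards rather than a bespoke record like this one.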
None of these regulations apply to systems used exclusively for military, defense, or national security purposes. They also do not apply to any research, testing, or development conducted before systems are placed on the market or put into service (unless the systems are tested in real-world conditions). Finally, they do not apply to purely personal, nonprofessional deployment of AI.[25]
Importantly, the work on developing guidelines on the practical implementation of the EU AI Act—including guidelines on requirements for high-risk systems and transparency obligations—is starting only now, as the European Commission has until February 2, 2026, to provide such guidelines.[26] Consequently, there is still significant uncertainty regarding the specific requirements with which AI developers will need to comply.
The EU debates regarding the AI governance model center on whether enforcement should rest with a single agency (centralized model) or be delegated to many entities (decentralized model). The EU AI Act creates several new bodies at the EU level while leaving member states with some flexibility in enforcing rules at the national level. Centralized oversight applies specifically to general-purpose AI models (which can be integrated into AI systems of different risk categories) that fall under the exclusive oversight of the newly established AI Office acting on behalf of the European Commission.[27] The AI Office has already begun acting in this capacity by launching work on the AI Code of Practice that will "facilitate the proper application of the rules of the AI Act for general-purpose AI models."[28]
Outside of general-purpose AI models, it will be up to member states to enforce regulations at a national level, though the AI Office will still play a significant role as the facilitator of the "uniform application of the AI Act across Member States."[29] For example, the AI Office will develop methods to evaluate risk levels and, thus, update the list of prohibited practices, high-risk systems, and systems with transparency obligations.[30]
The EU AI Act leaves it to member states to decide whether they will use a centralized or decentralized oversight model at the national level by selecting at least one notifying authority and at least one market surveillance authority.[31] Whichever model a state chooses, representatives of competent national authorities (one per country) will form the European AI Board, the role of which is to ensure consistency and coordination in the implementation of the EU AI Act. Among other tasks, the board will facilitate the exchange of information, expertise, and best practices; advise on consistent enforcement of the rules; and provide recommendations on codes of conduct and technical standards.[32]
Both the AI Board and the AI Office will be supported by two advisory bodies: an advisory forum and a scientific panel of independent experts. The advisory forum will facilitate dialogue with stakeholders and consist of representatives from industry, start-ups, small and medium-sized enterprises, civil society, and academia. The scientific panel, consisting of a set of independent experts selected by the European Commission, will provide technical expertise and advice to the AI Office.[33] The governance structure will be set up by August 2, 2025.[34]
In the debates about the EU AI Act's governance model, some stakeholders supported centralized governance on the EU level as having a better chance of facilitating a multi-stakeholder dialogue and keeping up with the rapidly advancing technology it sought to regulate.[35] A single agency was seen as able to "concentrate the necessary expertise and resources to effectively and consistently enforce the new regulation across all Member States and foreign regulated entities."[36] On the national level, some commentators regarded the single agency approach as "more likely able to hire talent, build internal expertise, and effectively enforce the AI Act" while facilitating easier coordination between member states.[37] The arguments against centralized governance concerned the time and money necessary to set up a new body and the distance from local stakeholders.[38] Additionally, it was noted that the downside of having a centralized AI authority is the separation of AI experts (who have oversight of technology) from the subject-matter experts (who have oversight of areas where AI can be applied).[39] Some stakeholders also criticized the decision to allow member states to choose national competent authorities because that might lead to "very different interpretations and enforcement activities" based on the primary mission of the respective organizations.[40]
Some European scholars point out that a risk-based approach seeking an "optimal balance between innovation and the protection [of] rights" becomes a characteristic of a broader European "digital constitutionalism," including the General Data Protection Regulation and the Digital Services Act.[41] The European Union regards the scale and pace of the digital revolution as the source of immense opportunities and challenges simultaneously.[42] In this environment, risk becomes a "tool to prioritize and target enforcement action in a manner that is proportionate to an actual hazard."[43] A risk-based approach has been praised for providing a clear justification of intervention, channeling the scarce enforcement resources where they are most needed and "striking the right balance between the benefits and risks of a particular technology."[44]
However, some researchers have also pointed out that the risk-based approach—especially as implemented in the EU AI Act—has shortcomings. For example, researchers noted that the Act does not provide a risk-benefit analysis, which would assess whether the risks outweigh the benefits and, thus, would provide a better basis for categorizing AI systems into different groups.[45] More broadly, it was noted that the Act does not provide any methodologically clear procedure in which potential harm would be measured, and the risk category would be assigned accordingly.[46] The Act has also been criticized for enumerating cases falling under specific risk categories, which can create gaps in regulatory oversight when, for example, some important cases are omitted.[47] Fast-paced advances in technology might necessitate frequent legislative revisions, contrary to the EU AI Act's initial intention to create comprehensive legislation that provides legal certainty.[48]
For the purposes of the EU AI Act, providers are defined as entities developing, placing on the market, or putting into service AI systems or general-purpose AI models, whereas deployers are entities using those systems.[49] The EU AI Act applies to EU and third-country (including the United States) providers placing AI systems on the market or putting them into service in the European Union. Therefore, all U.S. companies that want to sell AI systems on the EU market or those that use AI in their operations in the European Union need to comply with the EU AI Act. Furthermore, the Act applies to third-country providers and deployers of AI systems "where the output produced by the AI system is used in the Union."[50]
Although how the latter provision will be enforced is not yet clear, it might generate uncertainty for U.S. companies that do not directly operate in the EU market but also do not necessarily know where the outputs of their systems are used. Specifically, this might be the case if a product or service is used by a third party that provides products or services in the European Union. For example, a U.S. AI developer might sell AI products to U.S. companies providing human resources services, which export these services to the EU market. Even though an AI developer might not know where its client sells its services, the provided AI system might be deemed high risk and, therefore, fall under very strict EU requirements.
Meanwhile, noncompliance with the Act might result in a fine: up to €35 million or 7 percent of the provider's total worldwide annual turnover (whichever is higher) for providing an AI system in the prohibited category, or up to €15 million or 3 percent of the provider's total worldwide annual turnover (whichever is higher, unless the penalized entity is a small or medium-sized enterprise) for noncompliance with obligations under the high-risk and transparency risk categories.[51] There are additional administrative fines for providing incorrect, incomplete, or misleading responses to an oversight body request for information.[52] Given these circumstances, U.S. companies will need to monitor enforcement practices to fully understand the implications of the EU AI Act regardless of whether they want to operate independently in the EU market.
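To illustrate the "whichever is higher" logic of these ceilings, the following Python sketch computes the maximum possible fine for a hypothetical company; the figures mirror the thresholds summarized above, while actual penalties would be determined case by case by the competent authorities:

```python
# Illustrative arithmetic only: upper fine ceilings as summarized above.
# Actual penalties are set case by case; this merely shows the
# "whichever is higher" comparison between a fixed amount and a turnover share.
def max_fine_eur(annual_turnover_eur: float, violation: str) -> float:
    """Return the fine ceiling in euros for a given worldwide annual turnover."""
    ceilings = {
        "prohibited_practice": (35_000_000, 0.07),        # EUR 35 million or 7% of turnover
        "high_risk_or_transparency": (15_000_000, 0.03),  # EUR 15 million or 3% of turnover
    }
    fixed_amount, turnover_share = ceilings[violation]
    return max(fixed_amount, turnover_share * annual_turnover_eur)


# Example: a provider with EUR 2 billion in worldwide annual turnover.
print(max_fine_eur(2_000_000_000, "prohibited_practice"))        # 140000000.0 (7% exceeds EUR 35 million)
print(max_fine_eur(2_000_000_000, "high_risk_or_transparency"))  # 60000000.0 (3% exceeds EUR 15 million)
```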
U.S. companies not willing to comply with the EU AI Act's requirements might contractually prohibit their customers from using AI outputs in the European Union. However, the experience with the EU General Data Protection Regulation (GDPR) suggests that vendors might customarily require contractual provisions that address the EU AI Act's obligations (akin to the Data Processing Addendum under the GDPR).
The European Commission will develop practical implementation guidelines for the EU AI Act in the next two years.[53] However, some of the compliance requirements might be most efficiently implemented while a model is still in development. Therefore, newcomers to the market might need to decide early on whether they intend to operate in the EU market, which will likely generate additional costs—either compliance costs (thus potentially making them less competitive vis-à-vis U.S. companies not intending to export to the European Union) or opportunity costs of lost market share. This decision might be particularly difficult for start-ups and other small and medium-sized enterprises developing AI high-risk systems. Their compliance in the European Union will be supported by regulatory sandboxes, access to which will be provided on a preferential basis to companies with a registered office or a branch in the European Union.[54] Consequently, U.S. companies seeking to benefit from this opportunity will need to bear the costs of establishing a presence in the European Union.
The European Union highlights that the vast majority of AI systems currently available in its markets pose minimal risk and, consequently, can remain in the market without any restrictions or obligations for providers.[55] Nevertheless, navigating this new regulatory environment will create new challenges, not only for U.S. companies but also for the U.S. legislators seeking to regulate AI at home.
This research was sponsored by RAND's Institute for Civil Justice and conducted in the Justice Policy Program within RAND Social and Economic Well-Being and the Science and Emerging Technology Research Group within RAND Europe.
This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.