Risk-Based AI Regulation

A Primer on the Artificial Intelligence Act of the European Union

Krystyna Marcinek, Karlyn D. Stanley, Gregory Smith, Paul Cormarie, Salil Gunashekar

Research | Published Nov 20, 2024

Key Takeaways

  • Congress has been working on numerous bills to manage different risks and opportunities stemming from a rapid acceleration of artificial intelligence (AI) development. Over the past several years, the European Union (EU) has developed a comprehensive, risk-based law, the Artificial Intelligence Act.
  • The EU AI Act imposes obligations and requirements specific to the risks posed by the AI system in question: The higher the risk, the stricter the rules. The EU AI Act bans certain AI systems deemed to pose unacceptable risks, imposes extensive requirements on high-risk systems, and defines transparency requirements for limited-risk systems.
  • The EU AI Act creates a governance system in which some responsibilities are assigned to EU-level centralized bodies, whereas enforcement on the national level by member states can be either centralized or decentralized.
  • Understanding the EU approach to AI regulation will be important for understanding the obligations and responsibilities that U.S. companies operating in the EU market will have and the challenges they might face during implementation of the EU AI Act.

The Artificial Intelligence (AI) Act of the European Union (EU) is a landmark piece of legislation that lays out a detailed, wide-ranging framework for regulating AI in the European Union, covering the development, testing, and use of AI systems.[1] This report is one of several intended to serve as succinct snapshots of interconnected subjects that are central to AI governance discussions in the United States, in the European Union, and globally. Focusing on key aspects of the EU AI Act, it is not intended to provide a comprehensive analysis but rather to spark dialogue among stakeholders on specific facets of AI governance, especially as AI applications proliferate worldwide and complex governance debates persist. Although we refrain from offering definitive recommendations, we explore issues that are relevant to AI developers and legislators.

The U.S. Congress has been working on numerous bills to manage different risks and opportunities stemming from a rapid acceleration of AI development. As of June 1, 2024, there were more than 90 pieces of legislation introduced in the 118th U.S. Congress designed to impose restrictions on high-risk AI, propose evaluation or transparency requirements, create or designate an AI regulatory oversight authority, protect consumers through liability measures, or mandate AI studies.[2] The long list of proposed legislation indicates that, so far, Congress has generally been pursuing a decentralized approach to AI regulation.

Meanwhile, the European Union has chosen a different path. In August 2024, the EU AI Act—a comprehensive, risk-based AI regulation—entered into force.[3] This report on the EU approach to AI serves as a primer to familiarize U.S. legislators and their staffs with this alternative model of regulation. The purpose of this report is not to argue for or against the EU approach but to offer additional perspective for the U.S. AI legislation debate and to help legislators and their staffs understand the requirements that U.S. companies operating in the EU market will have to follow.[4]

Risk Framework for the EU AI Act

When the European Union initiated the AI regulatory process, the European Commission defined its objectives as ensuring that AI systems are safe and respect fundamental rights and EU values, providing legal certainty to encourage investment and innovation, enhancing governance and enforcement of existing laws, and preventing market fragmentation.[5] To accomplish these goals, the European Union decided to adopt a regulatory approach that would "introduce a proportionate and effective set of binding rules for AI systems" that are "clear and robust" and applied across all sectors, tailoring "the type and content of such rules to the intensity and scope of the risks that AI systems can generate."[6] In other words, the Act imposes obligations and requirements based on the risk posed by the AI system, where risk is defined as "the combination of the probability of an occurrence of harm and the severity of that harm."[7] The higher the risk, the stricter the rules.
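To make the definition concrete, the sketch below reads the "combination" of probability and severity as a product and maps the resulting score onto the Act's four risk tiers. This is a purely illustrative assumption: the Act defines risk qualitatively and prescribes no numeric scales, thresholds, or scoring formula.

```python
# Hypothetical illustration of the EU AI Act's risk definition
# (Art. 3(2)): probability of harm combined with its severity.
# The product rule, the 0-1 scales, and the tier thresholds are
# all assumptions; the Act itself defines risk only qualitatively.

def risk_score(p_harm: float, severity: float) -> float:
    """Combine probability of harm (0-1) with severity of harm (0-1)."""
    assert 0.0 <= p_harm <= 1.0 and 0.0 <= severity <= 1.0
    return p_harm * severity

def obligations(score: float) -> str:
    """'The higher the risk, the stricter the rules': obligations never
    decrease as the score rises (thresholds here are invented)."""
    if score >= 0.75:
        return "prohibited (unacceptable risk)"
    if score >= 0.40:
        return "extensive requirements (high risk)"
    if score >= 0.10:
        return "transparency obligations (limited risk)"
    return "no obligations (minimal risk)"

print(obligations(risk_score(0.9, 0.9)))  # prohibited (unacceptable risk)
print(obligations(risk_score(0.3, 0.2)))  # no obligations (minimal risk)
```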

The EU AI Act framework categorizes risks created by AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk. Except for high risk, these terms are not explicitly used in the EU AI Act's text, but they are widely used in EU communications about the Act.[8] The Act itself prohibits certain AI practices posing unacceptable risk, sets requirements for high-risk AI systems and obligations for their operators, and defines transparency obligations for certain AI systems that pose limited risk. AI systems that do not fall under any of these categories (i.e., those posing minimal risk) can be placed on the EU market without restrictions or obligations and, as such, are not covered by the EU AI Act (or discussed in this report). Additionally, the EU AI Act regulates general-purpose AI models, which can be integrated into AI systems of different risk categories.[9] The Act defines an AI system as a

machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.[10]

The prohibited practices deemed to pose unacceptable risks to fundamental rights and safety focus on the use of AI systems that

  • deploy subliminal or manipulative techniques to distort behavior and impair decisionmaking, causing significant harm
  • exploit vulnerabilities resulting from age, disability, or social situations
  • evaluate or classify individuals based on social behavior, leading to unjustified or disproportionate treatment
  • conduct criminal risk assessments based solely on profiling
  • create untargeted facial recognition databases
  • infer emotions in workplaces and educational institutions (except for medical or safety reasons)
  • conduct biometric categorization based on sensitive attributes, with certain exceptions for law enforcement.

Additionally, the EU AI Act bans the use of real-time remote biometric identification systems in public spaces for law enforcement purposes, except under specific, narrowly defined circumstances (such as serious threats to public security) and only with appropriate safeguards and judicial oversight.[11] These prohibitions will go into effect on February 2, 2025.[12]

The EU AI Act has two categories of high-risk AI systems. The first category covers those AI systems that are products or safety components of products required by separate EU product safety legislation to undergo a third-party conformity assessment.[13] This list contains regulations and directives regarding different types of vehicles (ground, maritime, and aerial), as well as machinery, lifts, radio equipment, protective equipment, pressure equipment, cableway installations, appliances burning gaseous fuels, medical devices, and toys.[14] The requirements for these high-risk systems will go into effect on August 2, 2027.[15]

The second category of high-risk AI systems poses significant risks to individuals' health, safety, and fundamental rights, especially when the systems "materially influenc[e] the outcome of decision making" in several areas listed in Annex III of the EU AI Act. This category encompasses specific AI systems used in biometric identification and categorization, critical infrastructure management, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and the administration of justice and democratic processes.[16] Even in these areas, AI systems are not deemed high risk if they perform a preparatory or narrow procedural task or are "intended to improve the result of a previously completed human activity."[17]

If these exceptions do not apply, AI systems deemed to pose a high risk will have to be registered in an EU database. Before these systems are placed on the market or put into service, providers will have to implement quality management systems that, among other things, ensure that high-risk AI systems comply throughout their life cycle with an extensive set of requirements regarding:

  • risk management to understand and minimize risks to health, safety, and fundamental rights
  • data and data governance to ensure high-quality training, validation, and testing of data sets and avoidance of biases
  • technical documentation to provide oversight bodies with sufficient information to assess compliance with requirements (technical documentation is specified in Annex IV)
  • automatic recordkeeping (logs) to enable monitoring of the functioning of a system throughout its lifetime (a minimal logging sketch follows this list)
  • transparency and provision of information to deployers to ensure that they understand the system's outputs, including their accuracy, and can use them properly
  • human oversight measures to enable deployers to decide not to use the system or to disregard, override, or reverse its outputs, or to interrupt the system and bring it to a halt in a safe manner
  • accuracy, robustness, and cybersecurity measures to ensure that systems perform consistently throughout their life cycle.[18]
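As referenced in the recordkeeping item above, the Act requires automatically generated logs over a high-risk system's lifetime but does not mandate a log schema or format. The sketch below is a minimal, hypothetical way a provider might record inference events for later audit; every field name and the JSON-lines format are assumptions.

```python
# Minimal sketch of automatic recordkeeping (logging) for a high-risk
# AI system. The Act requires lifetime logs to enable monitoring but
# prescribes no schema; all fields and the format here are assumptions.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_system.audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_system_audit.jsonl"))

def record_event(system_id: str, input_ref: str, output_ref: str,
                 human_override: bool) -> None:
    """Append one JSON line per inference event for later audit."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,        # which registered system produced the output
        "input_ref": input_ref,        # reference to the input, not the raw data
        "output_ref": output_ref,      # reference to the produced output
        "human_override": human_override,  # ties logging to the oversight measures above
    }
    audit_log.info(json.dumps(event))

# Hypothetical usage: a CV-screening system scores one application.
record_event("hr-screening-v2", "application:4711", "score:0.63", human_override=False)
```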

The quality management system has to include a regulatory compliance strategy; a description of techniques, procedures, and systematic actions for design, design control, design verification, development, quality control, and quality assurance; procedures for examination, testing, and verification of the system; systems and procedures for data management; a risk management system; a postmarket monitoring system; procedures for serious incident reporting and for recordkeeping; an accountability framework; and procedures for handling communication with oversight bodies.[19] All these requirements for high-risk AI systems of the second category will go into effect—along with most articles of the EU AI Act—on August 2, 2026. The list of high-risk systems can be amended by the European Commission.[20]

The last group of AI systems covered by the Act comprises those that pose transparency risks (i.e., limited-risk systems). The European Union requires that deployers of AI systems that interact directly with humans (e.g., chatbots) inform users that they are interacting with AI. Similarly, the EU AI Act requires that AI-generated synthetic audio, image, video, or text content be marked as such in a verifiable, machine-readable format.[21] In the case of deep fakes or content intended to inform the public on matters of public interest, deployers need to effectively disclose the artificial generation or manipulation of the outputs.[22] Finally, deployers have a responsibility to inform people about their exposure to an emotion recognition system or a biometric categorization system.[23] Some systems can fall under more than one set of obligations.[24] For example, some biometric categorization AI systems can simultaneously fall under high-risk system requirements and transparency requirements, while other biometric categorization AI systems might be prohibited.
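The Act requires that synthetic content be marked in a verifiable, machine-readable format but leaves the technique open; watermarking, fingerprinting, and provenance standards such as C2PA are candidate approaches. The sketch below is one hypothetical implementation: a signed JSON label that binds a disclosure to a hash of the content. Nothing about this format is specified by the Act.

```python
# Hypothetical machine-readable, verifiable marking of AI-generated
# content: a JSON label binding a disclosure to the content's hash,
# authenticated with an HMAC. The Act does not prescribe this (or any
# other) concrete format; key management is also elided here.
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-secret-key"  # placeholder; real deployments need proper key management

def label_content(content: bytes, generator: str) -> dict:
    """Produce a label declaring the content AI-generated."""
    disclosure = {
        "ai_generated": True,
        "generator": generator,  # hypothetical model identifier
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(disclosure, sort_keys=True).encode()
    disclosure["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return disclosure

def verify_label(content: bytes, label: dict) -> bool:
    """Check that the label matches the content and the provider's key."""
    claim = {k: v for k, v in label.items() if k != "signature"}
    if claim.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["signature"])

text = b"This paragraph was produced by a text generator."
label = label_content(text, generator="example-model-v1")
print(verify_label(text, label))  # True
```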

These regulations do not apply to systems used exclusively for military, defense, or national security purposes. Nor do they apply to research, testing, or development conducted before systems are placed on the market or put into service (unless the systems are tested in real-world conditions). Finally, they do not apply to purely personal, nonprofessional uses of AI.[25]

Importantly, the work on developing guidelines on the practical implementation of the EU AI Act—including guidelines on requirements for high-risk systems and transparency obligations—is starting only now, as the European Commission has until February 2, 2026, to provide such guidelines.[26] Consequently, there is still significant uncertainty regarding the specific requirements with which AI developers will need to comply.

Who Enforces the EU AI Act?

EU debates over the AI governance model centered on whether to vest enforcement in a single agency (a centralized model) or to delegate powers to many entities (a decentralized model). The EU AI Act creates several new bodies at the EU level while leaving member states some flexibility in enforcing rules at the national level. Centralized oversight applies specifically to general-purpose AI models (which can be integrated into AI systems of different risk categories); these fall under the exclusive oversight of the newly established AI Office, acting on behalf of the European Commission.[27] The AI Office has already taken up this role, launching work on the AI Code of Practice, which will "facilitate the proper application of the rules of the AI Act for general-purpose AI models."[28]

Outside of general-purpose AI models, it will be up to member states to enforce regulations at a national level, though the AI Office will still play a significant role as the facilitator of the "uniform application of the AI Act across Member States."[29] For example, the AI Office will develop methods to evaluate risk levels and, thus, update the list of prohibited practices, high-risk systems, and systems with transparency obligations.[30]

The EU AI Act leaves it to member states to decide whether they will use a centralized or decentralized oversight model at the national level by selecting at least one notifying authority and at least one market surveillance authority.[31] Whichever model a state chooses, representatives of competent national authorities (one per country) will form the European AI Board, the role of which is to ensure consistency and coordination in the implementation of the EU AI Act. Among other tasks, the board will facilitate the exchange of information, expertise, and best practices; advise on consistent enforcement of the rules; and provide recommendations on codes of conduct and technical standards.[32]

Both the AI Board and the AI Office will be supported by two advisory bodies: an advisory forum and a scientific panel of independent experts. The advisory forum will facilitate dialogue with stakeholders and consist of representatives from industry, start-ups, small and medium-sized enterprises, civil society, and academia. The scientific panel, consisting of a set of independent experts selected by the European Commission, will provide technical expertise and advice to the AI Office.[33] The governance structure will be set up by August 2, 2025.[34]

In the debates about the EU AI Act's governance model, some stakeholders supported centralized governance on the EU level as having a better chance of facilitating a multi-stakeholder dialogue and keeping up with the rapidly advancing technology it sought to regulate.[35] A single agency was seen as able to "concentrate the necessary expertise and resources to effectively and consistently enforce the new regulation across all Member States and foreign regulated entities."[36] On the national level, some commentators regarded the single agency approach as "more likely able to hire talent, build internal expertise, and effectively enforce the AI Act" while facilitating easier coordination between member states.[37] The arguments against centralized governance concerned the time and money necessary to set up a new body and the distance from local stakeholders.[38] Additionally, it was noted that the downside of having a centralized AI authority is the separation of AI experts (who have oversight of technology) from the subject-matter experts (who have oversight of areas where AI can be applied).[39] Some stakeholders also criticized the decision to allow member states to choose national competent authorities because that might lead to "very different interpretations and enforcement activities" based on the primary mission of the respective organizations.[40]

Will Risk Categorization Meet Its Goals?

Some European scholars point out that a risk-based approach seeking an "optimal balance between innovation and the protection [of] rights" becomes a characteristic of a broader European "digital constitutionalism," including the General Data Protection Regulation and the Digital Services Act.[41] The European Union regards the scale and pace of the digital revolution as the source of immense opportunities and challenges simultaneously.[42] In this environment, risk becomes a "tool to prioritize and target enforcement action in a manner that is proportionate to an actual hazard."[43] A risk-based approach has been praised for providing a clear justification of intervention, channeling the scarce enforcement resources where they are most needed and "striking the right balance between the benefits and risks of a particular technology."[44]

However, some researchers have also pointed out that the risk-based approach—especially as implemented in the EU AI Act—has shortcomings. For example, the Act does not provide for a risk-benefit analysis, which would assess whether a system's risks outweigh its benefits and thus offer a better basis for categorizing AI systems into different groups.[45] More broadly, the Act does not set out a methodologically clear procedure by which potential harm would be measured and a risk category assigned accordingly.[46] The Act has also been criticized for enumerating the cases falling under specific risk categories, which can create gaps in regulatory oversight when, for example, important cases are omitted.[47] Fast-paced advances in technology might therefore necessitate frequent legislative revisions, contrary to the EU AI Act's initial intention to create comprehensive legislation that provides legal certainty.[48]

Implications of the EU AI Act for U.S. Companies

For the purposes of the EU AI Act, providers are defined as entities developing, placing on the market, or putting into service AI systems or general-purpose AI models, whereas deployers are entities using those systems.[49] The EU AI Act applies to EU and third-country (including the United States) providers placing AI systems on the market or putting them into service in the European Union. Therefore, all U.S. companies that want to sell AI systems on the EU market or those that use AI in their operations in the European Union need to comply with the EU AI Act. Furthermore, the Act applies to third-country providers and deployers of AI systems "where the output produced by the AI system is used in the Union."[50]

Although how this latter provision will be enforced is not yet clear, it might generate uncertainty for U.S. companies that do not directly operate in the EU market and do not necessarily know where the outputs of their systems are used. Specifically, this might be the case when a product or service is used by a third party that provides products or services in the European Union. For example, a U.S. AI developer might sell AI products to U.S. companies providing human resources services, which in turn export these services to the EU market. Even though the AI developer might not know where its client sells its services, the provided AI system might be deemed high risk and, therefore, fall under very strict EU requirements.

Meanwhile, noncompliance with the Act might result in a fine: up to €35 million or 7 percent of the provider's total worldwide annual turnover (whichever is higher) for providing an AI system in the prohibited category, or up to €15 million or 3 percent of the provider's total worldwide annual turnover (whichever is higher, unless the penalized entity is a small or medium-sized enterprise) for noncompliance with obligations under the high-risk and transparency risk categories.[51] There are additional administrative fines for providing incorrect, incomplete, or misleading responses to an oversight body request for information.[52] Given these circumstances, U.S. companies will need to monitor enforcement practices to fully understand the implications of the EU AI Act regardless of whether they want to operate independently in the EU market.
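To make the penalty arithmetic concrete: Each ceiling is the higher of a fixed amount and a share of total worldwide annual turnover. A minimal sketch using the figures quoted above (the turnover value is invented for illustration):

```python
# Penalty ceilings under the EU AI Act, as quoted above. The applicable
# maximum is the HIGHER of a fixed amount and a percentage of total
# worldwide annual turnover (with a lower cap for small and medium-sized
# enterprises in the high-risk/transparency case, per the parenthetical).

def max_fine_prohibited(turnover_eur: float) -> float:
    """Ceiling for prohibited-practice violations: max(EUR 35M, 7% of turnover)."""
    return max(35_000_000, 0.07 * turnover_eur)

def max_fine_high_risk(turnover_eur: float) -> float:
    """Ceiling for high-risk/transparency noncompliance: max(EUR 15M, 3% of turnover)."""
    return max(15_000_000, 0.03 * turnover_eur)

# For a provider with EUR 2 billion in turnover, the percentage dominates:
print(max_fine_prohibited(2_000_000_000))  # 140000000.0 (7% > EUR 35M)
print(max_fine_high_risk(2_000_000_000))   # 60000000.0 (3% > EUR 15M)
```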

U.S. companies not willing to comply with the EU AI Act's requirements might contractually prohibit their customers from using AI outputs in the European Union. However, the experience with the EU General Data Protection Regulation (GDPR) suggests that contractual provisions addressing the EU AI Act's obligations might instead become a customary part of vendor agreements (akin to the Data Processing Addendum under the GDPR).

The European Commission will develop practical implementation guidelines for the EU AI Act in the next two years.[53] However, some of the compliance requirements might be most efficiently implemented while a model is still in development. Therefore, newcomers to the market might need to decide early on whether they intend to operate in the EU market, which will likely generate additional costs—either compliance costs (thus potentially making them less competitive vis-à-vis U.S. companies not intending to export to the European Union) or opportunity costs of lost market share. This decision might be particularly difficult for start-ups and other small and medium-sized enterprises developing AI high-risk systems. Their compliance in the European Union will be supported by regulatory sandboxes, access to which will be provided on a preferential basis to companies with a registered office or a branch in the European Union.[54] Consequently, U.S. companies seeking to benefit from this opportunity will need to bear the costs of establishing a presence in the European Union.

The European Union highlights that the vast majority of AI systems currently available in its markets pose minimal risk and, consequently, can remain in the market without any restrictions or obligations for providers.[55] Nevertheless, navigating this new regulatory environment will create new challenges, not only for U.S. companies but also for the U.S. legislators seeking to regulate AI at home.

Notes

  • [1] European Union, "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) Text with EEA relevance," June 13, 2024. Hereafter cited as the EU AI Act, this legislation was adopted by the European Parliament in March 2024 and approved by the European Council in June 2024. As of July 16, 2024, all text cited in this report related to the EU AI Act can be found at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689#d1e5435-1-1
  • [2] Brennan Center for Justice at New York University School of Law, "Artificial Intelligence Legislation Tracker," webpage, last updated May 31, 2024. As of June 16, 2024: https://www.brennancenter.org/our-work/research-reports/artificial-intelligence-legislation-tracker
  • [3] European Parliament, "Artificial Intelligence Act," webpage, February 9, 2024. As of June 20, 2024: https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2021)698792
  • [4] Regardless of whether, from the U.S. perspective, the EU AI Act strikes the right balance between innovation and protection of rights or meets its legislative consistency objective, adopting such an approach in the U.S. legislative context might be challenging: To do so would likely require a multiple referral, which makes such a bill less likely to pass. See Glen S. Krutz and Courtney L. Cullison, "Multiple Referral and U.S. House Legislative Success in the 1990s," American Review of Politics, Vol. 29, 2008. Another paper also found that the "primary referral" rule disincentivizes secondary committees' deliberation (see Jonathan Lewallen and Scott Moser, It's in Our Hands: Multiple Referral with a Primary Committee, Social Science Research Network, 2016).
  • [5] European Union, "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts," COM(2021) 206 final, Brussels, 21.4.2021, p. 1. As of August 29, 2024: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
  • [6] EU AI Act, Recitals 8 and 26.
  • [7] EU AI Act, Chap. I, Art. 3(2).
  • [8] For example, see European Commission, "Shaping Europe's Digital Future: AI Act," webpage, undated. As of August 29, 2024: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai. Also see European Council, "Artificial Intelligence," webpage, October 14, 2024. As of October 29, 2024: https://www.consilium.europa.eu/en/policies/artificial-intelligence/
  • [9] For more information on the European Union's regulation of general-purpose AI models, see Gregory Smith, Karlyn D. Stanley, Krystyna Marcinek, Paul Cormarie, and Salil Gunashekar, General-Purpose Artificial Intelligence (GPAI) Models and GPAI Models with Systemic Risk, RAND Corporation, RR-A3243-1, 2024. As of October 30, 2024: https://www.rand.org/pubs/research_reports/RRA3243-1.html
  • [10] EU AI Act, Chap. I, Art. 3(1).
  • [11] EU AI Act, Chap. II, Art. 5(1)–(7).
  • [12] EU AI Act, Chap. XIII, Art. 113.
  • [13] EU AI Act, Chap. III, Art. 6(1).
  • [14] EU AI Act, Annex I.
  • [15] EU AI Act, Chap. XIII, Art. 113.
  • [16] EU AI Act, Annex III.
  • [17] EU AI Act, Chap. III, Art. 6(3).
  • [18] EU AI Act, Chap. III, Articles 9–15.
  • [19] EU AI Act, Chap. III, Art. 17(1).
  • [20] EU AI Act, Chap. III, Art. 7.
  • [21] EU AI Act, Chap. IV, Art. 50(2).
  • [22] EU AI Act, Chap. IV, Art. 50(4).
  • [23] EU AI Act, Chap. IV, Art. 50(3).
  • [24] EU AI Act, Chap. IV, Art. 50(4).
  • [25] EU AI Act, Chap. I, Art. 2.
  • [26] EU AI Act, Chap. III, Art. 6(5), and Art. 96.
  • [27] EU AI Act, Chap. IX, Art. 88(1).
  • [28] European Commission, "AI Act: Participate in the Drawing-Up of the First General-Purpose AI Code of Practice," July 30, 2024.
  • [29] Future of Life Institute, "The AI Office: What Is It, and How Does It Work?" webpage, March 21, 2024. As of October 30, 2024: https://artificialintelligenceact.eu/the-ai-office-summary/
  • [30] EU AI Act, Chap. XIII, Art. 112(11). Other responsibilities of the AI Office will include cooperation with market surveillance authorities when a general-purpose AI model that can be used in an AI system is in noncompliance with the Act (Art. 75(2)); leading the work on the implementation of fundamental rights impact assessment for high-risk AI systems (Art. 27(5)); the detection and labeling of artificially generated or manipulated content (Art. 50(7)); and the development of voluntary codes of conduct for non-high-risk AI systems (Art. 95).
  • [31] EU AI Act, Chap. VII, Art. 70.
  • [32] EU AI Act, Chap. VII, Art. 66.
  • [33] EU AI Act, Chap. VII, Art. 68.
  • [34] EU AI Act, Chap. XIII, Art. 113.
  • [35] Giorgos Verdi, "The Case of an EU AI Agency," European Digital SME Alliance, December 5, 2022.
  • [36] Nicolas Moës, Felicity Reddel, and Samuel Curtis, Giving Agency to the AI Act, The Future Society, April 2023, p. 4.
  • [37] Alex Engler, "Key Enforcement Issues of the AI Act Should Lead EU Trilogue Debate," Brookings Institution, June 16, 2023.
  • [38] Moës, Reddel, and Curtis, 2023, p. 4; Verdi, 2022.
  • [39] Engler, 2023.
  • [40] Kai Zenner, "Some Personal Reflections on the EU AI Act: A Bittersweet Ending," LinkedIn, February 15, 2024. As of April 4, 2024: https://www.linkedin.com/pulse/some-personal-reflections-eu-ai-act-bittersweet-ending-kai-zenner-avgee
  • [41] For example, see Giovanni De Gregorio and Pietro Dunn, "The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age," Common Market Law Review, Vol. 59, No. 2, 2022; Claudia Quelle, "Enhancing Compliance Under the General Data Protection Regulation: The Risky Upshot of the Accountability- and Risk-Based Approach," European Journal of Risk Regulation, Vol. 9, No. 3, 2018; and Maria Eduarda Gonçalves, "The Risk-Based Approach Under the New EU Data Protection Regulation: A Critical Perspective," Journal of Risk Research, Vol. 23, No. 2, 2020.
  • [42] European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: A Digital Single Market Strategy for Europe, COM(2015) 192 final, Brussels, May 6, 2015, p. 3. As of August 30, 2024: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52015DC0192
  • [43] De Gregorio and Dunn, 2022, p. 476.
  • [44] Martin Ebers, Truly Risk-Based Regulation of Artificial Intelligence: How to Implement the EU's AI Act, SSRN, June 19, 2024, pp. 10–11.
  • [45] Ebers, 2024, pp. 9, 12–13.
  • [46] Ebers, 2024, pp. 10, 13–14.
  • [47] Daniel Leufer and Fanny Hidvegi, "The Pitfalls of the European Union's Risk-Based Approach to Digital Rulemaking," UCLA Law Review, Vol. 71, 2024; Ebers, 2024, pp. 14–15.
  • [48] Ebers, 2024, pp. 9–10. Additionally, the European Commission is empowered to amend only the high-risk systems list, making the other two more difficult to change. See Leufer and Hidvegi, 2024, p. 169.
  • [49] EU AI Act, Chap. I, Art. 3(3)–(4).
  • [50] EU AI Act, Chap. I, Art. 2(1).
  • [51] EU AI Act, Chap. XII, Art. 99(3)–(4).
  • [52] EU AI Act, Chap. XII, Art. 99(5).
  • [53] EU AI Act, Chap. III, Art. 6(5), and Art. 96.
  • [54] EU AI Act, Chap. VI, Art. 62(1)(a).
  • [55] Minimal-risk systems include, for example, AI-enabled video games or spam filters. European Commission, "Shaping Europe's Digital Future: AI Act," webpage, October 14, 2024. As of October 30, 2024: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Document Details

Citation

RAND Style Manual

Marcinek, Krystyna, Karlyn D. Stanley, Gregory Smith, Paul Cormarie, and Salil Gunashekar, Risk-Based AI Regulation: A Primer on the Artificial Intelligence Act of the European Union, RAND Corporation, RR-A3243-3, 2024. As of April 30, 2025: https://www.rand.org/pubs/research_reports/RRA3243-3.html

Chicago Manual of Style

Marcinek, Krystyna, Karlyn D. Stanley, Gregory Smith, Paul Cormarie, and Salil Gunashekar. Risk-Based AI Regulation: A Primer on the Artificial Intelligence Act of the European Union. Santa Monica, CA: RAND Corporation, 2024. https://www.rand.org/pubs/research_reports/RRA3243-3.html.

This research was sponsored by RAND's Institute for Civil Justice and conducted in the Justice Policy Program within RAND Social and Economic Well-Being and the Science and Emerging Technology Research Group within RAND Europe.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.