The Application of U.S. Tort Law and Liability to Harms from Artificial Intelligence Systems
Published Nov 20, 2024
The advent of new, powerful artificial intelligence (AI) systems has prompted a burst of interest in how best to govern AI. Although software and AI have existed for many years, new AI technologies with increasingly broad capabilities, such as the generative AI systems OpenAI's ChatGPT and Anthropic's Claude, have recently been introduced; these developments have increased interest in AI governance and in understanding how existing law applies to this technology.[1] In particular, there is interest in how existing rules of liability might apply to these new AI systems.[2] This report provides a brief overview of some of the key issues at the intersection of advanced AI systems and U.S. tort law.
This report, which focuses on the application of U.S. tort law to harms from AI, is not intended to provide a comprehensive analysis but rather to highlight key issues and areas of uncertainty raised by the continued advancement of AI technology and its deployment. It is intended to support continued dialogue among stakeholders on specific facets of AI governance, especially as AI applications proliferate worldwide and complex governance debates persist.[3] Note that other legal and regulatory frameworks will similarly apply to AI where relevant, but they are outside the scope of this report.
Established principles and frameworks of liability and tort law will be particularly influential in governing AI. Tort law is the set of rules, doctrines, and statutes that governs the remedying of harms caused by wrongful actions under civil law in the United States.[4] When one party harms another and is then sued by the injured party, the case will usually be brought in civil court and decided under tort law. Philosophically, tort law seeks to compensate injured parties for harms wrongfully caused by others.
In contrast to many other proposed regimes for governing AI, tort law is already in force and is regularly applied in state and federal court. Liability law also does not require an act by Congress, the President, or other federal or state decisionmakers to apply to AI-caused injuries. It is therefore valuable for policymakers to understand some of the key structural components and doctrines of tort law and how they might be applied to AI-caused harms in the future.
To provide some guidance about how liability might be assigned for civil harms from AI, it is first necessary to point out two critical structural components of tort law in the United States: First, it is primarily a form of common law; second, most U.S. tort law is made by state courts.
Common law is a form of law that is primarily created through judicial decisions in the process of litigation rather than being made through statutes passed by legislatures.[5] Many of the rules governing the allocation of liability for harms, such as the duties that parties owe each other or what constitutes negligence, have been developed by the accumulation of judicial decisions over time. This said, not all tort law is common law; statutes can directly modify the rules of tort, and statutes are often given significant weight in determining whether a party is liable for the harms they inflict—even if those statutes do not explicitly address themselves to the subjects that tort law covers.[6]
This state of affairs has several implications for liability for harms done by AI. First, absent statutory intervention by federal or state legislatures, courts will need to adjudicate harms caused by novel AI systems and resolve uncertainties by building on common-law foundations. Although some predictions can be made about how tort doctrine will be applied to AI, there will still be significant uncertainty about how tort law will treat specific issues in concrete cases across different states as those cases are put before the courts. (Several of these key uncertainties are covered in more detail later in this report.)
Second, if and when cases are brought by plaintiffs, judges will eventually need to grapple with complex and novel legal issues, and juries will face complex factual issues. AI-related cases will require judges to apply existing law and doctrine to the novel circumstances that AI will create. In the U.S. legal system, judges alone, not juries, determine the law, such as legal duties, under tort. As a result, many cases will not make it to trial because judges will apply the law to these new facts and decide many cases solely as matters of law, or "a determination of the applicable law."[7] Although some cases might be easily decided under precedent, judges may have to grapple with difficult questions regarding the design and deployment of AI, particularly when injuries occur that a party might claim were caused by the AI's developer rather than by the owner of a particular AI-enhanced product. These decisions also could include creating new common-law doctrines to handle the new challenges that AI poses, doctrines that might then be applied in future cases in that same state jurisdiction. Judges will therefore have a major role both in determining how specific AI-related tort cases proceed and in shaping how each state's doctrine develops in response to AI-related challenges.
Although many cases are likely to be decided as matters of law by judges, some cases will still proceed to trial and potentially be decided by a jury. Many components of tort law are questions of fact (dealing with "the particular details of the case")[8] and therefore traditionally left to juries to determine, although defendants can and do agree to waive their right to a jury trial and have judges act as the fact-finder for their case.[9] The outcomes of these cases will rely on the specific factual judgments of judges and juries whose behavior and sympathies might be difficult to predict. The outcome of novel AI-related cases will therefore be particularly uncertain. This might drive more cases to settle out of court if AI companies seek to avoid the risks associated with litigating novel facts for which there is little precedent—particularly for large-scale harms against sympathetic plaintiffs.
In the United States, the common law of tort is primarily formed by the decisions and precedents of state courts, not federal ones.[10] In most situations, there is no such thing as federal common law, and federal courts will work through complex choice-of-law rules to determine which state's common law should be applied to a particular case. Although federal common law does exist for narrow areas, such as for harms arising on maritime waters under admiralty law, federal courts mostly apply state common law to allocate liability for harms.[11] Therefore, it will be primarily state common law, and potentially state courts, that will develop rules to govern AI-related torts.
State control of tort law in the United States also means that there can be variation between different states on key rules for determining liability.[12] How AI-caused harms are treated will differ from jurisdiction to jurisdiction, and such variations could significantly affect the imposition of liability and/or potential size of liability judgments against AI developers. The potential for state-by-state variation has, in the past, created incentives for federal policymakers to intervene and set national rules of the road for liability.[13] Whether such a solution is right for AI is unclear. Although setting common rules would make it easier for AI developers to manage their risk and make clear to developers and users when they would be held liable, the relative youth of AI technology means that it might not yet be possible to know what these ideal rules should be.
Although common law is primarily formed in state courts, there is also a strong interest in synthesizing and identifying the common elements of different states' bodies of tort law.[14] Much of this work is done by the American Law Institute, which issues Restatements of Law, including the Restatements of Torts, that synthesize and summarize state bodies of law. The Restatements are often considered highly persuasive by state courts when they formulate their own common-law rules. However, there is no requirement that a state court adopt the Restatement's particular synthesis of a given law, and states regularly reject the Restatements or adopt them only in part. The Restatements are therefore one set of influential standards that help shape the common law of tort across the United States. The remainder of this report draws from the Restatements of Torts because they are still relied on in many states, but the authors acknowledge that states might have their own individual standards or develop unique new rules for applying tort law to AI.
It should also be noted that contract terms often require parties to forgo tort claims should a product malfunction. These limitations on liability are common in user agreements for software and might prevent AI users from pursuing traditional tort claims should such terms be included in that AI's user agreement. However, some of these waivers might not be enforceable, and there could be confusion about whether contractual waivers of tort claims apply when specific claims are brought. Where a valid contract containing such a waiver does govern the parties' relationship, however, an injured party may be barred from bringing the tort claims that the waiver covers.
Many tort claims regarding AI will allege negligence.[15] These cases generally allege that a defendant acted negligently and breached a duty of care toward the plaintiff, resulting in the plaintiff's injury.[16] Although there can be specific duties of care, all people are held to have a duty to take reasonable care against causing foreseeable harm to others.[17] This is generally summarized as requiring people to be "reasonably prudent" both in their behavior and in taking precautions to ensure they do not inflict harms on others.[18] This requirement generally applies to preventing harm to a person's body and property; there is generally no duty to avoid causing pure economic loss or purely emotional harm.[19]
Several issues immediately arise from this simple description of negligence and its application to AI. To start, what standards will be used to judge whether an AI developer or deployer was reasonably prudent when they developed or deployed their AI? The duty an AI developer might owe to deployers or users of their product is also unclear. There will also be arguments, potentially difficult to settle, about whether a particular AI was truly defective, given that AI is being used to automate more tasks and that clear definitions of "defective" behavior might not yet exist. Although reason and logical judgment will play a role in determining this duty, courts are also likely to look to the practices and customs of the AI industry more broadly to judge whether a party was negligent in developing or deploying their AI.[20]
Negligence law in the United States generally places great emphasis on industry custom and standards when determining whether a defendant was negligent; following industry practice is taken as strong, although not conclusive, evidence that a defendant was not negligent.[21] Conversely, a defendant's failure to follow industry practice will be taken as strong evidence that the defendant was negligent. Courts are also not limited in determining what constitutes industry practice; they can look to informal standards or to those promulgated by an industry body that the court finds representative of industry practice.[22]
This means that, in common law, the AI industry's own safety practices will play a significant role in defining what is required of AI developers and deployers to avoid liability for negligence. Courts can still find AI companies negligent even if they did follow industry custom—courts have noted in the past that entire industries could be negligent or lagging in their safety practices.[23] However, particularly safety-conscious AI companies, or associations of AI companies, also might have an opportunity to develop standards for safe AI development that in turn could establish benchmarks for the whole industry in future litigation. Policymakers and government bodies could also play a role in this standard-setting by publishing their own guidance for AI development and deployment.[24]
Another issue arises from the complexity of the AI supply chain and its intersection with the requirements of causation. Causation, in negligence, requires that a defendant be the factual, or "but-for," cause of the plaintiff's injury for the defendant to be found liable.[25] Demonstrating which party caused a plaintiff's injury might be difficult for cases involving AI systems. AI systems, particularly large language models (LLMs) such as ChatGPT, are often developed by one party, which then makes its model available for various AI deployers to customize or fine-tune to behave in certain ways for their own products.[26] Although LLMs can be made safer through fine-tuning and other techniques, these methods are not foolproof, and protections embedded in LLMs can be removed by other developers and bypassed by users.[27] It might not be clear whether an injury resulted from actions by the developer, the deployer, or some other party, let alone whether any party's actions were also negligent.
With such an extended supply chain, the complexity of AI development, and the difficulty of controlling LLM behavior, it might be difficult for plaintiffs to show that any actor was the but-for cause of a particular injury. An AI developer might argue that the deployer's usage of its AI was the cause of the injury; the AI deployer might argue that the original developer was, in fact, negligent in some way in the original creation of the model. Courts have developed doctrines to deal with issues of causation in other contexts, and will have examples to draw from when encountering complex issues of causation posed by AI. However, courts could face difficulties with the uncertainties involved in identifying a particular party’s behavior as the true cause of a plaintiff's injuries.
Another issue that plaintiffs in AI-related tort cases might face is proving standing. Standing is a constitutional requirement that must be fulfilled by a plaintiff to successfully bring a case; failure to show standing results in a case being dismissed.[28] In federal court, standing generally requires that a plaintiff demonstrate "injury in fact," meaning an actual injury that is "concrete and particularized" to them, that there is a causal connection between the injury and the conduct of the defendant, and that the injury can be redressed by a favorable court decision.[29] The requirement that plaintiffs show they have suffered a concrete and particularized injury has been a major barrier to software-related suits because common alleged injuries, such as invasion of privacy, often have not been concrete enough, or been able to point to sufficiently clear damages to plaintiffs, to satisfy the requirement for standing.[30] The same issues with showing concrete and particularized harms could reemerge with suits related to AI. Although a case involving physical harm caused by an AI malfunction would be very likely to pass the standing requirement, it might be more difficult to show a sufficiently "concrete and particularized injury" when an AI violates a plaintiff's privacy or outputs misinformation about a plaintiff.
In addition to the law of negligence, U.S. states have developed specialized doctrines of products liability to adjudicate cases involving defects in the manufacture or design of products, or in the warnings provided with them.[31]
The first question is whether AI will be considered a product. Courts have regularly considered whether other forms of software qualify as products and have mostly held that they do not for the purposes of products liability law.[32] It should also be noted that software is not considered a product under the Uniform Commercial Code, a model law that many states have adopted; this, in turn, could make treating AI as a product more difficult.[33] The incorporation of AI into physical products, such as IoT (Internet of Things) devices, self-driving cars, or robots, might prompt courts to begin considering AI as a form of product subject to specialized products liability doctrines.
If AI is considered a product, it might be subject in certain state jurisdictions to the consumer expectations test, a legal standard that considers whether a product is dangerous to an extent beyond what an ordinary purchaser or user of that product would expect.[34] This test asks whether a product is "malfunctioning" according to the expectations of the consumer; if it is, liability is assigned to the product's designer.[35]
The issue of consumer expectations for AI might therefore be particularly important in some state jurisdictions when determining liability for an AI-caused tort. AI developers and deployers might be able to avoid some liability by warning users of the potential risks that an AI system might pose, thereby appropriately calibrating a user to the potential harms that an AI product could inflict. The issue of how consumers would expect an AI system to behave will therefore be particularly critical should AI, or AI-enabled products, be broadly subject to products liability law throughout the United States.
It should also be noted that some states do not apply the consumer expectations test but rather apply the "risk-utility" test.[36] This test asks whether an alternative design would, at reasonable cost, reduce the risk posed by a particular product.[37] In practice, the risk-utility test tends to hinge on analyzing whether a particular design is "reasonable" compared with alternatives; for AI, this assessment might depend on whether appropriate safeguards and controls were implemented in a particular AI system that is alleged to have caused harm. For cases in which the risk-utility test is used to determine products liability, whether AI companies and deployers are held liable might turn on judgments about whether they acted reasonably in designing the AI system in question.
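To make the intuition behind this balancing concrete, one simplified, illustrative formulation (not drawn from this report's sources or from any particular jurisdiction's doctrine) compares the burden of adopting an alternative design with the reduction in expected harm that the alternative would achieve:

B < ΔP × L

where B is the cost, or burden, of adopting the alternative design; ΔP is the reduction in the probability of harm that the alternative would provide; and L is the magnitude of the harm should it occur. If the inequality holds, that is, if the alternative design would have reduced expected harm by more than it would have cost, a fact-finder applying a risk-utility analysis might view the design actually used as unreasonable. This stylization echoes the cost-benefit logic often associated with the Learned Hand formula in negligence law and is offered here only as a rough heuristic, not as a statement of any court's test.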
The First Amendment's application to tort law and civil liability might also have significant implications for AI. This is because many outputs from AI systems take the form of text or speech and, therefore, might plausibly receive some protection under the First Amendment. Historically, the Supreme Court has held that the imposition of civil liability for speech implicates the First Amendment and must be subject to its restrictions.[38] This rule does not prevent the imposition of civil liability in tort cases for speech, but it has led to changes in tort doctrine to avoid running afoul of the First Amendment. For example, in New York Times Co. v. Sullivan, the Supreme Court limited the common-law tort of defamation in consideration of the First Amendment.[39] Courts often balance the concerns of the First Amendment and tort law, choosing to make success in a tort case more difficult if the conduct in question was or resembled speech.[40]
The primary implication of these precedents is that if an AI's output resembles speech, it is possible that a court might make it more difficult to recover in a tort case related to that output.[41] Conversely, it will probably be easier for a plaintiff to succeed in a tort claim if an AI's behavior takes the form of "acting" in the real world and directly injures a plaintiff, such as a malfunctioning self-driving car hitting a pedestrian, given that speech considerations would be significantly reduced in such a situation.
Tort cases involving AI might also be affected by Section 230 of the Communications Decency Act of 1996, which provides that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."[42] Section 230 has been a major barrier in tort cases brought against technology companies in the past, so there is interest in how it might affect cases brought regarding harms caused by AI.[43] There is no definitive answer yet on how Section 230 will apply to AI broadly; its application is likely to be highly specific to the particular facts of each case and to how a particular harm was caused by AI. A key point of uncertainty is whether an AI developer or deployer would itself be considered an information content provider under Section 230, which would prevent that party from invoking the provision as a protection against tort liability.
The common-law nature of tort and its development by state courts mean that the development of tort law in the AI context will be a gradual and distributed process, which creates a potential opening for federal policymakers to intervene and create nationwide rules of the road for AI liability. However, policymakers would have to carefully weigh several considerations, such as the proper balance between state and federal law under the U.S. federal system. It might also be undesirable to preempt the fact-finding and doctrinal development of the courts, which might be able to develop more-comprehensive solutions to harms from AI as the technology rapidly evolves and is deployed in new use cases that legislators might not foresee. At the same time, such interventions could ease the path for plaintiffs in tort cases: for example, recognizing new classes of harms or imposing new duties on defendants could encourage developers and deployers of AI to be more cautious in their use of such systems where those systems could cause harm. How to strike this balance is beyond the scope of this report, but it is hoped that a stronger understanding of the issues at the intersection of tort law and AI will help inform future discussion and debate.
This research was sponsored by RAND's Institute for Civil Justice and conducted in the Justice Policy Program within RAND Social and Economic Well-Being and the Science and Emerging Technology Research Group within RAND Europe.