Artificially Intelligent Entities as Legal Persons: Navigating the Frontiers of Legal Personhood in the Age of Autonomous Technologies
- Shivangi Bhardwaj
- Apr 3
Abstract
This article examines the complex issue of granting legal personhood to artificially intelligent entities. It explores the challenges and implications of recognising AI as legal persons, focusing on the types of legal personhood and their potential consequences. It discusses the evolving nature of AI technology, its increasing autonomy, and the difficulties of defining and regulating AI within existing legal frameworks. It addresses key questions surrounding AI agency, liability, and the ethics of attributing moral responsibility to non-human entities, highlighting the need for careful deliberation in developing new legal paradigms suited to the unique challenges posed by AI and the importance of balancing innovation with human safeguards. Rather than offering premature answers, the article aims to stimulate further discussion on the desirability and extent of granting legal personhood to AI entities in an increasingly AI-driven world.
Keywords: Artificial intelligence, Legal personhood, Agency, Liability, Artificial Legal Person.
I. Introduction
One of the first lessons of life is that you reap what you sow[i]. Actions have consequences. Virtue is rewarded, and crime is followed by punishment. Law maintains order; without it, we face tyranny. In an orderly society, a person bears specific responsibilities and is guaranteed certain rights. Conventionally, a person is a natural person, or sometimes an artificial person that acts through natural persons. However, with the rise of new and emerging technologies and the erosion of conventional boundaries between the real world and the virtual or artificial one, especially the rise of artificially intelligent entities (‘AI entity’), the conventional notion of a legal person demands re-examination.
The rapid technological developments enabled by Artificial Intelligence (‘AI’) raise the question of whether AI entities can be treated as legal persons. Freedom of choice is the essence of human dignity[ii]. Choices bring consequences for humans, for which they are held liable. The recent AI legislation enacted by the European Union (‘EU’) defines AI systems as machine-based systems that are designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment[iii]. This definition emphasises the autonomy and adaptability of AI technology. So, when an AI entity makes decisions without human supervision, who bears liability for the consequences that follow?
This article aims to examine the challenges and implications of granting the status of a legal person to an AI entity. It primarily focuses on identifying the types of legal personhood and the consequences of granting them to an AI entity. To avoid the adverse effects of premature conclusions drawn at this nascent stage of AI development, the question of whether, and to what extent, such recognition is desirable has been left open for further deliberation.
II. Legal Personhood — What Does It Entail?
Who is a legal person?
In common parlance, the term ‘person’ usually refers to human beings, i.e., natural persons. However, a legal person is different from a natural person: a legal person has recognised and guaranteed legal rights and duties[iv]. Following Kant, one might use the term ‘legal person’ to describe a being that not only has Kantian moral personhood but also possesses the cognitive and linguistic abilities necessary to exercise its rights under the law[v]. Hans Kelsen distinguishes between a (legal) person, which he defines as a collection of rights and obligations, and a ‘man,’ which he sees as a physical being to which this collection is attributed. According to Kelsen, ‘man’ is the basis for attributing a specific set of legal rights and obligations, while this set of rights and obligations essentially defines a person[vi]. Legal personality has two essential aspects: identification as a legal subject and recognition as a legal agent. Any entity with legal agency is also a legal subject, but not every legal subject is a legal agent. The set of rights and duties accompanying the recognition of an entity as a legal person varies with the nature of the entity. Until now, legal agency has remained exclusive to natural persons[vii].
Legal personhood is not a quality inherent to an entity; rather, it is a status conferred when the governing legal framework recognises the entity as a legal person. Whether an entity is considered a legal person therefore depends on how it is treated within the specific legal system, making it an institutional determination[viii], and legal personhood is a status that can be granted or revoked by a legal system[ix]. It is the ability of an individual, system, or legal entity to be acknowledged by the law to a degree that allows them to carry out fundamental legal activities. This includes the capacity to possess property, engage in contracts, initiate or be subject to legal actions, act as a legal representative, and fulfil legal obligations[x].
III. Types of Legal Persons and Degree of Their Liabilities
Broadly, legal persons can be classified into two categories: natural and non-natural. These two types can be further sub-classified based on the extent of their legal rights and obligations. For example, a natural legal person under a legal system does not enjoy the same set of rights and duties throughout their entire life. As a minor, he is a legal subject with legal rights but lacks the agency to make legally valid decisions; his guardians are his legal agents. Once he reaches the age of legal adulthood, the extent of his rights and duties changes, and he becomes his own agent, provided he is not disqualified for some other legal reason. Likewise, an infant and an adolescent are both legal persons, but to different degrees, depending on the context. Therefore, even for a natural legal person, there are different types of personhood based on the degree of agency bestowed[xi]. In the same vein, for non-natural legal persons, there are different subcategories of personhood based on the nature of the entity and the degree of agency granted. A corporation, which is an artificial legal entity, differs from animals, whether domesticated or kept as pets. The recognition of a river or a lagoon as a legal person[xii] is not like that of a corporation. An idol bestowed with legal personality is also very different from other non-natural legal persons.
The difference lies not only in the degree of rights but also in the degree of liabilities. Behind each non-natural legal person stands a human agent who makes decisions for or supervises the entity, so when the question of liability arises, that human agent or guardian is held liable. MacCormick identifies legal personhood as an institutional creation with active and passive elements. Based on these, he categorises the capacity of a legal person into four types[xiii]:
(a) Pure passive capacity: It refers to an entity’s legal ability to benefit from certain legal provisions, which are intended to protect the entity from harm or promote its interests.
(b) Passive transactional capacity: It is the ability to receive the benefits or burdens of a transaction, such as an infant’s ability to own property.
(c) Capacity responsibility: This involves determining whether an individual can be held legally liable for their actions, resulting in criminal or civil sanctions.
(d) Transactional capacity: It refers to the ability to perform legally effective acts.
Visa A.J. Kurki, in his book ‘A Theory of Legal Personhood,’ provides another classification based on the concept of subjecthood, highlighting the active and passive elements of identification as a legal person. He focuses on two distinctions, active/passive and substantive/procedural, to evaluate an entity’s status within a specific legal domain[xiv]. To illustrate with an example from the law of contracts, a contract can be created on behalf of an infant, who plays a passive role in contract law. Such a contract can be legally enforced in court on behalf of the infant, reiterating their passive role in the procedural aspects of contract law. In contrast, being an active subject in contract law relates to one’s capacity and duties. Adults of sound mind can choose which contracts to enter into and whether to pursue legal action regarding a contract. They are responsible for fulfilling their contractual obligations, unless they have delegated this responsibility. As a result, adults are considered both active and passive subjects in contract law[xv].
However, the advancement of technology has led to the creation of something that does not fit within any of the pre-existing categories of legal persons: AI entities. The presence of AI-enabled technologies and their role in society are rapidly increasing, which forces us to ask: Should AI entities be identified as legal persons? If yes, to what extent? Can AI entities be held accountable for their criminal actions and required to pay compensation under civil remedies? And how do we punish an invisible, unsupervised, and continuously evolving entity like AI when no human agent is directly involved in the decision-making behind the action in question? The following section discusses the meaning and impact of granting legal personhood to AI.
IV. Granting Legal Rights and Duties To AI
Law operates within a defined frame; governing uncertainty is neither desirable nor possible. Therefore, to understand the need for regulating AI entities, it is essential to understand what the term AI means and why it is a novel phenomenon requiring a new approach to granting legal personhood to a non-natural entity.
Understanding AI
The fantasy of AI has been around for a long time. However, until recently, machines with human-like cognitive ability were not a part of everyday human life. That cognitive ability was itself primitive and limited, supervised through a determinate set of commands from humans. Jerry Kaplan, a computer science expert and futurist, suggests that defining artificial intelligence is a challenging task due to the lack of consensus on the definition of intelligence[xvi]. When the term ‘intelligence’ itself cannot be defined with certainty, any attempt to define AI is like chasing the horizon; the closer it seems, the farther it recedes. However, any effort to regulate something we cannot comprehend will be fruitless; for a legal system to be effective in its regulation, a more definite understanding of the subject being regulated is required. Individuals cannot be expected to comply with rules they do not comprehend. If the law is so complex or ambiguous that it cannot be known in advance, its ability to guide behaviour is diminished, if not entirely negated[xvii].
Today, AI entities are broadly categorised into two groups: narrow and general. Narrow AI, also known as ‘weak’ AI, refers to systems capable of accomplishing specific objectives using computational intelligence. These objectives may include tasks like natural language processing for translation or navigation while driving. A narrow AI system is designed for a specific task and cannot generalise beyond its intended function. The majority of existing AI systems fall into this narrow category. On the other hand, general AI, also known as ‘strong’ AI, can achieve a wide range of objectives, including setting new goals independently. This type of AI encompasses many aspects of human intelligence and is often depicted in popular culture’s portrayals of robots and AI[xviii]. However, current technology has not yet achieved general AI at a level comparable to human capabilities, leading some to question its feasibility[xix].
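To make the distinction concrete, the following is a minimal, purely illustrative Python sketch of a ‘narrow’ system: a naive word-frequency classifier built for exactly one task. The data, names, and method here are invented for illustration (real narrow AI systems are vastly more sophisticated), but the structural point is the same: competence that does not transfer beyond the task the system was built for.

```python
from collections import Counter

# Hypothetical training examples for a single, narrow task: spam filtering.
SPAM = ["win a free prize now", "claim your free reward"]
HAM = ["meeting moved to noon", "please review the draft contract"]

def train(examples):
    """Count word frequencies across one class of example messages."""
    return Counter(word for text in examples for word in text.split())

spam_words, ham_words = train(SPAM), train(HAM)

def classify(message):
    """Label a message by its word overlap with each class."""
    words = message.split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

print(classify("claim a free prize"))   # -> "spam"
print(classify("review the contract"))  # -> "ham"
# Ask the same system to translate a sentence or drive a car and it has
# nothing to offer: its competence exists only relative to one task.
# A general AI, by contrast, could set and pursue objectives beyond
# the single function it was built for.
```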
Prevailing definitions
Under the human-centric approach, where intelligence is defined relative to human intelligence, a machine is considered intelligent if it successfully convinces a human that it is a human[xx]. Whether it replicates just one aspect of human behaviour or several, the machine’s intelligence is measured against human behaviour and intelligence; the approach focuses on the imitation or duplication of human behaviour. Such a definition is both over- and under-inclusive. Not everything done by humans is related to intelligence, nor can everything done by machines be done by humans; emergent properties of AI suggest that it can do many things beyond human capabilities[xxi].
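The shape of this human-centric test can be sketched in a few lines. The Python code below is a hypothetical stub, not a working implementation: the ‘judge’ merely flips a coin where a real interrogator would scrutinise the transcripts, and both respondents are stand-ins. It shows only the structure of the imitation game, in which intelligence is attributed whenever the interrogator cannot tell machine from human.

```python
import random

def interrogate(respond, questions):
    """Collect a respondent's answers to the interrogator's questions."""
    return [respond(q) for q in questions]

def judge_cannot_tell(answers_a, answers_b):
    """Stand-in for the interrogator's verdict. A real judge would
    compare the two transcripts; this stub merely flips a coin."""
    return random.random() < 0.5

questions = ["What is your favourite poem?", "Why did you laugh just now?"]
human = lambda q: "a human reply to: " + q        # stand-in for a person
machine = lambda q: "a generated reply to: " + q  # stand-in for an AI entity

# Under the imitation game, the machine counts as 'intelligent' exactly
# when the interrogator cannot distinguish its transcript from the human's.
deemed_intelligent = judge_cannot_tell(
    interrogate(human, questions), interrogate(machine, questions)
)
print("machine passes:", deemed_intelligent)
```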
Another approach defines AI by emphasising the notions of thinking and acting rationally, rather than merely imitating humans. On this view, an AI system is one that has specific objectives and capabilities and employs reasoning to achieve those objectives. A further approach is sceptical of any universally acceptable definition of AI, emphasising that the definition of intelligence itself varies across time, people, and places, so that no common consensus on defining AI is possible. The AI Act of the EU, for its part, provides a definition based on functionality, with an emphasis on autonomy and adaptiveness. Yet even though it emphasises these aspects of AI entities, it remains silent on who shall be held accountable for any harm caused to an individual by such autonomous entities. This makes the question of determining the personhood status of these entities all the more critical.
V. AI: Legal Subject or Legal Agent?
Legal personality is a fiction created by legal systems to govern and regulate their subjects. AI challenges agency, a quality that has traditionally been reserved for natural persons. All legal persons are legal subjects, but only qualified natural persons are legal agents. A legal subject is a being within a particular legal system that possesses certain rights and duties. Recognition as a legal subject does not arise automatically upon one’s creation but is bestowed upon individuals, animals, or objects under various legal systems. A legal agent is an entity with the capacity to control and change its behaviour while understanding the consequences of its actions or inactions. This capacity requires an understanding of and engagement with relevant legal norms, indicating that legal agency is not simply given but developed through interaction. Despite the existence of various types of legal subjects, including non-human entities, legal agency has always remained with humans. However, advancements in AI could disrupt this exclusive dominance of humans[xxii].

The distinguishing feature of AI lies in its capacity for autonomous decision-making, setting it qualitatively apart from existing technologies. Unlike conventional tools, AI is sometimes required to make moral and independent decisions. This presents a challenge to established legal systems because, for the first time, a technological entity is inserting itself between humans and potential outcomes. How would an AI respond to the famous trolley problem? If it were put in the same situation again, would it make the same choice, or would it decide differently? The answer is unknown. Humans use discretion when exercising their legal agency; how will AI exercise such discretion? Can that be predicted? How appropriate would it be to subject AI to the same moral standards as humans, given that it clearly does not function in the same way as humans? AI is capable of independent development, learning beyond what was pre-determined by its developers. It learns and adapts beyond what humans can foresee. The defeat of a human champion by the AI AlphaGo was not predicted by its developers[xxiii].
The challenges presented by AI are so unique and novel that AI entities cannot be granted legal personhood under the existing legal system, as doing so would lead to the breakdown of the system itself. Take the example of granting copyright to AI: who should hold the right and be entitled to the gains from its fair usage? AI is an invisible, abstract entity. Should the right go to the end user, the developer of the program, or the owner of the program? No answer to this question is infallible. Similarly, if an automated, self-driving car causes an accident resulting in the death of a person, who shall be held liable? Would the resulting liability be criminal or civil? Existing legal practices cannot provide conclusive answers to these questions. AI is not like an artificial corporation, where the final agency remains with humans; nor is it an idol or a river, for its nature is far more dynamic, and its ability to affect humans is beyond ordinary human foresight.
VI. Personhood To AI Entities - The Debate Between Just and Legal
We have come to understand that defining and governing AI entities at this stage is fraught with uncertainties, and no matter how cautiously we approach their regulation, it is bound to encounter multiple challenges and will most likely be imperfect; such is the nature of governing emerging technologies. However, the fear of imperfection should not deter us from regulating AI entities; we can trust the law to evolve in tandem with the technology. The EU has taken its first step towards AI governance through its risk-based approach to these entities[xxiv]. It is the first legal framework on AI. Although it does not discuss granting personhood to AI, it focuses on the development of AI entities while safeguarding humans against foreseeable harm caused by such entities. It emphasises the importance of fostering trustworthy AI while encouraging innovation and mitigating foreseeable risks.
The very idea of placing ‘trust’ in an entity implies that some degree of independent cognitive ability is attributed to that entity. If so, the answer to the question of granting personhood to AI is already in the making.
Assigning Liability to an AI Entity
This article started with the notion of actions and their consequences, or liabilities. Today, AI is used in various forms across all sectors of human life. From generating artwork and music to conducting AI-operated robotic diagnoses and surgeries, humankind has undoubtedly benefitted from this development. However, we must not forget that the seemingly supreme intelligence of these entities has been programmed to learn from human behaviour, and they learn not only the best of human behaviour but also the worst of it. The questionable ethics of a society, its biases and prejudices, are also learnt by the AI entity. Therefore, if most of society believes that saving one famous person from being crushed by a trolley is worth sacrificing the lives of five ordinary people, the same principle will be reflected in the actions taken by the AI entity operating within that society.
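A toy sketch can make this inheritance of bias visible. In the hypothetical Python example below, the ‘dataset’ and decision rule are invented solely for illustration and bear no relation to any real system; the point is the mechanism: a system that simply replays the majority behaviour in its training data reproduces society’s skewed preference, with nothing in it to correct the bias.

```python
# Hypothetical 'training data': (is_famous, did society choose to save them?)
training_decisions = [
    (True, True), (True, True), (True, True),   # famous -> saved
    (False, False), (False, False),             # ordinary -> not saved
]

def learn_rule(data):
    """Estimate, for each group, how often society chose to save them."""
    rates = {}
    for group in (True, False):
        outcomes = [saved for famous, saved in data if famous == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

learned = learn_rule(training_decisions)

def decide(is_famous):
    """Replay the majority behaviour observed in the training data."""
    return learned[is_famous] >= 0.5

print(decide(True), decide(False))  # -> True False: the bias is inherited
```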
The only problem is that, unlike the case of natural persons having agency, we have no mechanism to determine liability where the agency lies with AI. Nor can we identify the culprit behind what an AI has learnt. Where an AI is trained on a faulty or poor-quality dataset that affects the quality of its output, or where an AI trained on quality data is deployed in a poor environment and, as it adapts, produces output shaped by its prevailing surroundings, whom do we hold accountable? How do we find the proximate cause? We may harmonise existing laws to find a way around this, but would it be fair to hold the developer, the market provider, or the end user/deployer of the product accountable for an action that was never within their control and whose consequences they could not foresee? How an AI will interpret or react in a given situation is not predictable; indeed, the EU legislation insists on the adaptiveness and autonomy of AI systems and their ability to infer, implying that complete predictability of a machine-based system’s outputs would disqualify that system from being recognised as an AI system. Traditional computational software systems are accordingly explained to fall outside the scope of the definition of AI systems under the AI Act[xxv].

However, it would also be unjust to leave the harm caused by an AI without remedy and the AI without liability for its actions. We often come across the phenomenon known as ‘hallucinating AI,’ where generative AI provides fabricated information that bears no relation to reality. One usually dismisses such hallucinations as harmless. Still, when AI is used in medical, legislative, judicial, corporate, and several other fields in everyday activities, such hallucinations do not remain harmless.
The urgency of determining whether to grant personhood to AI lies not only in making it more convenient for AI to own property and copyrights and to earn income and royalties, but also in preventing innocent individuals from being held accountable for unforeseeable harms committed by AI entities.
In terms of allocating criminal liability to AI, the primary challenge lies in determining intent, or mens rea, an essential component of a criminal offence. Existing AI systems are already used in criminal activities, such as cyberattacks, cyber frauds, and the creation of deepfakes, where the human actor and intent are identifiable. However, as the technology evolves, future AI entities will be able to function more independently, making the chain of human command less easily identifiable. Consequently, determining the intent behind any wrongful action will become a challenge. Hence, it is best not to prematurely dismiss all discussion of the criminal liability of AI on the ground that, since AI is not a legal person, it cannot have intent.
The Ethical Conundrums
AI entities, regardless of their level of advancement, lack consciousness, emotions, and intrinsic moral agency. While AI can mimic decision-making and autonomous action, it does so based on algorithms and data rather than understanding or free will. Granting personhood could falsely attribute moral agency to machines that lack the capacity for ethical reasoning or responsibility. However, not granting personhood also presents ethical challenges, especially the problem of assigning liability when AI entities cause harm. Without personhood, humans, whether developers, manufacturers, or users, must bear legal responsibility. Conversely, if AI were granted personhood, the machine itself could be held accountable, potentially complicating the attribution of blame. There is a high risk that this could reduce accountability for human actors and create loopholes through which responsibility is diluted. Malicious human actors have exploited loopholes in the law to serve their own interests for ages. As important as the question of AI personhood is, it must not become a Frankenstein nightmare in which AI serves as a ready scapegoat for wrongdoers.
VII. Conclusion
AI is the new reality. Instead of fear-mongering over the prospects of sentience so often depicted in science fiction, it is better to learn to live with and adapt to this reality. Legal systems, too, need to evolve to address the complex and dynamic challenges presented by AI. At the turn of the 21st century, the concept of AI was largely theoretical, a subject of purely academic debate; today, cases involving AI entities and their rights are already being presented before courts. Instances of accidents caused by self-driving cars[xxvi], copyright infringements[xxvii], and the application of AI algorithms for tacit collusion[xxviii] and anti-competitive practices are already being reported[xxix]. AI today aids humans in their decision-making, but it would not be surprising if the roles were reversed in the future, given the progression of technology and the advancement of machine learning. Is it desirable to acknowledge AI entities as legal persons, and if so, to what extent: full recognition similar to that of humans, limited recognition like that bestowed upon corporations, or a new kind of legal person designed to address the challenges specific to the rise of AI? Before jumping to premature conclusions about a future in which AI rules humans, it is best to devise a mechanism that keeps human agency in the loop with the decision-making ability of machines and finds a way to assign liability accordingly. Granting legal personhood to AI entities does not imply recognition of their intrinsic worth; it can serve other purposes, such as enhancing economic efficiency or managing risks. However, this subject requires careful deliberation, and such deliberations should not be unilateral but must include all parties involved, from developers to end users.
End Notes
[i] Galatians 6:7 - Do not be deceived: God cannot be mocked. A man reaps what he sows.
[ii] Brownsword, R. (2017). From Erewhon to AlphaGo: For the sake of human dignity, should we destroy the machines? Law, Innovation and Technology, 9(1), 117-153. https://doi.org/10.1080/17579961.2017.1303927.
[iii] Article 3: Definitions. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/article/3/.
[iv] Solum, Lawrence B., Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992). Available at: https://scholarship.law.unc.edu/nclr/vol70/iss4/4.
[v] See Chapter 4, Kurki, Visa A.J., ‘Who or What Can be a Legal Person?’, A Theory of Legal Personhood (Oxford, 2019; online edn, Oxford Academic, 19 Sept. 2019), https://doi.org/10.1093/oso/9780198844037.003.0005.
[vi] See Chapter 5, Kurki, Visa A.J., ‘Who or What Can be a Legal Person?’, A Theory of Legal Personhood (Oxford, 2019; online edn, Oxford Academic, 19 Sept. 2019), https://doi.org/10.1093/oso/9780198844037.003.0005.
[vii] See Chapter 2, Turner, J., Robot Rules: Regulating Artificial Intelligence. 1st ed. 2019.
[viii] Supra Note 5.
[ix] Supra Note 5.
[x] Bayern, S. “The Implications Of Modern Business-Entity Law For The Regulation Of Autonomous Systems,” Stanford Technology Law Review, 2015.
[xi] Supra Note 5.
[xii] In 2017, New Zealand gave the Whanganui River the status of a legal person. The same year, India declared two of its rivers, the Ganga and the Yamuna, living entities. In 2019, Bangladesh gave all of its rivers the same legal rights as humans. See Hollingsworth, J. (2020, December 12). This river in New Zealand is legally a person. Here’s how it happened. CNN. https://www.cnn.com/2017/03/15/asia/river-personhood-trnd/index.html; Suri, M. (2017, March 23). India becomes second country to give rivers human status. CNN. https://edition.cnn.com/2017/03/22/asia/india-river-human/index.html; Fears of evictions as Bangladesh gives rivers legal rights. Reuters. (2019, July 5). https://www.reuters.com/article/us-bangladesh-landrights-rivers-idUSKCN1TZ1ZR.
[xiii] Supra Note 5, Neil MacCormick, Institutions of Law: An Essay in Legal Theory (Oxford University Press 2007) 77–99.
[xiv] Supra Note 5.
[xv] Supra Note 5.
[xvi] Supra Note 4, Kaplan, J. Artificial Intelligence: What Everyone Needs to Know (New York: Oxford University Press, 2016), 1.
[xvii] See Chapter 1, Turner, J., Robot Rules: Regulating Artificial Intelligence. 1st ed. 2019.
[xviii] Remember Baymax from Big Hero 6, or Richie Rich’s butler Irona.
[xix] See, Chapter 1, Turner, J. Robot Rules: Regulating Artificial Intelligence. 1st ed. 2019.
[xx] The Turing test, devised by Alan Turing, also known as the imitation game.
[xxi] Supra Note 17.
[xxii] See, chapter 2, Turner, J. Robot Rules: Regulating Artificial Intelligence. 1st ed. 2019.
[xxiii] Mozur, P. (2017, May 23). Google’s AlphaGo Defeats Chinese Go Master in Win for A.I. The New York Times. https://www.nytimes.com/2017/05/23/business/google-deepmind-alphago-go-champion-defeat.html.
[xxiv] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[xxv] Recital 12. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). https://artificialintelligenceact.eu/recital/12/.
[xxvi] Associated Press. (2023, November 8). Cruise recalls all self-driving cars after grisly accident and California ban. The Guardian. Retrieved January 16, 2025, from https://www.theguardian.com/technology/2023/nov/08/cruise-recall-self-driving-cars-gm.
[xxvii] Crawford, K., & Schultz, J. (2024, January 16). Generative AI is a crisis for copyright law. Issues in Science and Technology. Retrieved January 16, 2025, from https://issues.org/generative-ai-copyright-law-crawford-schultz/.
[xxviii] OECD (2017), Algorithms and Collusion: Competition Policy in the Digital Age. www.oecd.org/competition/algorithms-collusion-competition-policy-in-the-digital-age.htm.
[xxix] Economics Observatory. (n.d.). AI cartels: What does artificial intelligence mean for competition policy? Retrieved January 16, 2025, from https://www.economicsobservatory.com/ai-cartels-what-does-artificial-intelligence-mean-for-competition-policy.
Authored by Shivangi Bhardwaj, Advocate at Metalegal Advocates. The views expressed are personal and do not constitute legal opinions.