Artificial Intelligence and Legal Identity
This article focuses on the problem of granting the status of a legal subject to artificial intelligence (AI), especially under civil law. Legal identity is defined here as a concept integral to the notion of legal capacity; however, this does not imply accepting that moral subjectivity is the same as moral personality. Legal identity is a complex attribute that can be recognized for certain subjects or assigned to others.

I believe this attribute is graded, discrete, discontinuous, multifaceted, and changeable. This means that it can contain more or fewer elements of different kinds (e.g., duties, rights, competencies, etc.), which in most cases can be added or removed by the legislator; human rights, which, according to the common opinion, cannot be taken away, are the exception.

Nowadays, humanity is facing a period of social transformation related to the replacement of one technological mode by another; "smart" machines and software learn quite quickly; artificial intelligence systems are increasingly capable of replacing people in many activities. One of the issues arising more and more frequently due to the advancement of artificial intelligence technologies is the recognition of artificial intelligent systems as legal subjects, as they have reached the level of making fully autonomous decisions and potentially manifesting "subjective will". This issue was hypothetically raised in the twentieth century. In the twenty-first century, the scientific debate is steadily evolving, reaching the other extreme with each introduction of new artificial intelligence models into practice, such as the appearance of self-driving cars on the streets or the presentation of robots with a new set of functions.

The legal issue of determining the status of artificial intelligence is of a general theoretical nature, which is attributable to the objective impossibility of predicting all possible outcomes of developing new models of artificial intelligence. Nevertheless, artificial intelligence systems (AI systems) are already actual participants in certain social relations, which requires the establishment of "benchmarks", i.e., the resolution of fundamental issues in this area for the purpose of legislative consolidation, and thus the reduction of uncertainty in predicting the development of relations involving artificial intelligence systems in the future.

The problem of the alleged identity of artificial intelligence as an object of research, mentioned in the title of the article, certainly does not cover all artificial intelligence systems, including many "electronic assistants" that do not claim to be legal entities. Their set of functions is limited, and they represent narrow (weak) artificial intelligence. We will rather refer to "smart machines" (cyber-physical intelligent systems) and generative models of virtual intelligent systems, which are increasingly approaching general (strong) artificial intelligence comparable to human intelligence and, in the future, even exceeding it.

By 2023, the issue of creating strong artificial intelligence has been urgently raised by multimodal neural networks such as ChatGPT, DALL-E, and others, whose intellectual capabilities are being improved by increasing the number of parameters (perception modalities, including those inaccessible to humans), as well as by using amounts of training data that humans cannot physically process. For example, multimodal generative neural network models can produce images, literary texts, and scientific texts such that it is not always possible to distinguish whether they were created by a human or by an artificial intelligence system.

IT experts highlight two qualitative leaps: a speed leap (the frequency of the emergence of brand-new models), which is now measured in months rather than years, and a volatility leap (the inability to accurately predict what might occur in the field of artificial intelligence even by the end of the year). The ChatGPT-3 model (the third generation of the natural language processing algorithm from OpenAI) was introduced in 2020 and could process text, while the next-generation model, ChatGPT-4, launched by the manufacturer in March 2023, can "work" not only with texts but also with images, and the next generation model is learning and will be capable of even more.

A few years ago, the anticipated moment of technological singularity, when the development of machines becomes virtually uncontrollable and irreversible, dramatically changing human civilization, was expected to occur at least a few decades from now, but nowadays more and more researchers believe that it can happen much faster. This implies the emergence of so-called strong artificial intelligence, which will display abilities comparable to human intelligence and will be able to solve a similar or even wider range of tasks. Unlike weak artificial intelligence, strong AI will have consciousness, yet one of the essential conditions for the emergence of consciousness in intelligent systems is the ability to perform multimodal behavior: integrating data from different sensory modalities (text, image, video, sound, etc.), "connecting" information of different modalities to reality, and creating the complete, holistic "world metaphors" inherent in humans.

In March 2023, more than a thousand researchers, IT experts, and entrepreneurs in the field of artificial intelligence signed an open letter published on the website of the Future of Life Institute, an American research center specializing in the investigation of existential risks to humanity. The letter calls for suspending the training of new generative multimodal neural network models, as the lack of unified security protocols and the legal vacuum significantly increase the risks, since the speed of AI development has grown dramatically due to the "ChatGPT revolution". It was also noted that artificial intelligence models have developed unexplained capabilities not intended by their developers, and the share of such capabilities is likely to increase gradually. In addition, such a technological revolution dramatically boosts the creation of intelligent gadgets that will become widespread, and new generations, modern children who have grown up in constant communication with artificial intelligence assistants, will be very different from previous generations.

Is it possible to hinder the development of artificial intelligence so that humanity can adapt to new conditions? In theory, it is, if all states facilitate this through national laws. Will they do so? Judging by the published national strategies, they will not; on the contrary, each state aims to win the competition (to maintain leadership or to narrow the gap).

The capabilities of artificial intelligence attract entrepreneurs, so businesses invest heavily in new developments, with the success of each new model driving the process. Annual investments are growing, counting both private and state investments in development; the global market for AI solutions is estimated at hundreds of billions of dollars. According to forecasts, in particular those contained in the European Parliament's resolution "On Artificial Intelligence in the Digital Age" dated May 3, 2022, the contribution of artificial intelligence to the global economy will exceed 11 trillion euros by 2030.

Practice-oriented business leads to the implementation of artificial intelligence technologies in all sectors of the economy. Artificial intelligence is used in both the extractive and processing industries (metallurgy, the fuel and chemical industries, engineering, metalworking, etc.). It is applied to predict the efficiency of developed products, automate assembly lines, reduce rejects, improve logistics, and prevent downtime.

The use of artificial intelligence in transportation involves both autonomous vehicles and route optimization through the prediction of traffic flows, as well as ensuring safety through the prevention of dangerous situations. The admission of self-driving cars to public roads is an issue of intense debate in parliaments around the world.

In banking, artificial intelligence systems have almost completely replaced humans in assessing borrowers' creditworthiness; they are increasingly being used to develop new banking products and enhance the security of banking transactions.

Artificial intelligence technologies are taking over not only business but also the social sphere: healthcare, education, and employment. The application of artificial intelligence in medicine enables better diagnostics, the development of new medicines, and robotics-assisted surgery; in education, it allows for personalized lessons and the automated assessment of students' and teachers' knowledge.

Today, employment is changing ever more rapidly due to the exponential growth of platform employment. According to the International Labour Organization, the share of people working through digital employment platforms augmented by artificial intelligence is steadily increasing worldwide. Platform employment is not the only component of the labor transformation; the growing level of production robotization also has a significant impact. According to the International Federation of Robotics, the number of industrial robots continues to increase worldwide, with the fastest pace of robotization observed in Asia, primarily in China and Japan.

Indeed, the capabilities of artificial intelligence to analyze data used for production management, diagnostic analytics, and forecasting are of great interest to governments. Artificial intelligence is being implemented in public administration. Nowadays, efforts to create digital platforms for public services and to automate many processes related to decision-making by government agencies are being intensified.

The concepts of "artificial personality" and "artificial sociality" are mentioned more frequently in public discourse; this demonstrates that the development and implementation of intelligent systems have shifted from a purely technical field to the study of the various means of their integration into humanitarian and socio-cultural activities.

In view of the above, it can be stated that artificial intelligence is becoming more and more deeply embedded in people's lives. The presence of artificial intelligence systems in our lives will become more evident in the coming years; it will increase both in the work environment and in public spaces, in services and at home. Artificial intelligence will increasingly deliver more efficient results through the intelligent automation of various processes, thus creating new opportunities and posing new threats to individuals, communities, and states.

As their intellectual level grows, AI systems will inevitably become an integral part of society; people will have to coexist with them. Such a symbiosis will involve cooperation between humans and "smart" machines, which, according to Nobel Prize-winning economist J. Stiglitz, will lead to the transformation of civilization (Stiglitz, 2017). Even today, according to some lawyers, "in order to enhance human welfare, the law should not distinguish between the activities of humans and those of artificial intelligence when humans and artificial intelligence perform the same tasks" (Abbott, 2020). It should also be taken into account that the development of humanoid robots, which are acquiring physiology more and more similar to that of humans, will lead, among other things, to their performing gender roles as partners in society (Karnouskos, 2022).

States must adapt their legislation to changing social relations: the number of laws aimed at regulating relations involving artificial intelligence systems is growing rapidly around the world. According to Stanford University's AI Index Report 2023, while only one such law was adopted in 2016, there were 12 of them in 2018, 18 in 2021, and 37 in 2022. This prompted the United Nations to define a position on the ethics of using artificial intelligence at the global level. In September 2022, a document was published that contained the principles of the ethical use of artificial intelligence and was based on the Recommendations on the Ethics of Artificial Intelligence adopted a year earlier by the UNESCO General Conference. Nevertheless, the pace of development and implementation of artificial intelligence technologies is far ahead of the pace of relevant changes in legislation.

Basic Concepts of the Legal Capacity of Artificial Intelligence

Considering the concepts of potentially granting legal capacity to intelligent systems, it should be acknowledged that the implementation of any of these approaches would require a fundamental reconstruction of the existing general theory of law and amendments to a number of provisions in certain branches of law. It should be emphasized that proponents of different views often use the term "electronic person"; thus, the use of this term does not make it possible to determine which concept the author of a given work supports without reading the work itself.

The most radical and, obviously, the least popular approach in scientific circles is the concept of the individual legal capacity of artificial intelligence. Proponents of this approach put forward the idea of "full inclusivity" (extreme inclusivism), which implies granting AI systems a legal status similar to that of humans as well as recognizing their own interests (Mulgan, 2019), given their social significance or social content (social valence). The latter is attributable to the fact that "the robot's physical embodiment tends to make humans treat this moving object as if it were alive. This is even more evident when the robot has anthropomorphic characteristics, as the resemblance to the human body makes people start projecting emotions, feelings of pleasure, pain, and care, as well as the desire to establish relationships" (Avila Negri, 2021). The projection of human emotions onto inanimate objects is not new, dating back throughout human history, but when applied to robots it entails numerous implications (Balkin, 2015).

The prerequisites for legal confirmation of this position are often mentioned as follows:

– AI systems are reaching a level comparable to human cognitive functions;

– the increasing degree of similarity between robots and humans;

– humaneness, i.e., the protection of intelligent systems from potential "suffering".

As the list of mandatory requirements shows, all of them involve a high degree of theorization and subjective assessment. In particular, the trend towards the creation of anthropomorphic robots (androids) is driven by the day-to-day psychological and social needs of people who feel comfortable in the "company" of subjects similar to themselves. Some modern robots have other constraining properties due to the functions they perform; these include "reusable" courier robots, which place a priority on robust construction and efficient weight distribution. In this case, the last of these prerequisites comes into play, due to the formation of emotional bonds with robots in the human mind, similar to the emotional bonds between a pet and its owner (Grin, 2018).

The idea of "full inclusion" of the legal status of AI systems and humans is reflected in the works of some legal scholars. Since the provisions of the Constitution and sectoral legislation do not contain a legal definition of personality, the concept of "personality" in the constitutional and legal sense theoretically allows for an expansive interpretation. In this case, persons would include any holders of intelligence whose cognitive abilities are recognized as sufficiently developed. According to A. V. Nechkin, the logic of this approach is that the essential difference between humans and other living beings is their unique, highly developed intelligence (Nechkin, 2020). Recognition of the rights of artificial intelligence systems appears to be the next step in the evolution of the legal system, which is gradually extending legal recognition to previously discriminated people and today also provides access to non-humans (Hellers, 2021).

If AI systems are granted such a legal status, the proponents of this approach consider it appropriate to grant such systems not the literal rights of citizens in their established constitutional and legal interpretation, but their analogs and certain civil rights with some deviations. This position is based on objective biological differences between humans and robots. For instance, it makes no sense to recognize the right to life for an AI system, since it does not live in the biological sense. The rights, freedoms, and obligations of artificial intelligence systems should be secondary when compared to the rights of citizens; this provision establishes the derivative nature of artificial intelligence as a human creation in the legal sense.

Potential constitutional rights and freedoms of artificial intelligent systems include the right to be free, the right to self-improvement (learning and self-learning), the right to privacy (protection of software from arbitrary interference by third parties), freedom of speech, freedom of creativity, recognition of AI system copyright, and limited property rights. Specific rights of artificial intelligence can also be listed, such as the right to access a source of electricity.

As for the duties of artificial intelligence systems, it is suggested that the three well-known laws of robotics formulated by I. Asimov should be constitutionally consolidated: doing no harm to a person and preventing harm through their own inaction; obeying all orders given by a person, except for those aimed at harming another person; and taking care of their own safety, except in the two previous cases (Naumov and Arkhipov, 2017). In this case, the rules of civil and administrative law will reflect some other duties.

The concept of the individual legal capacity of artificial intelligence has very little chance of being legitimized, for several reasons.

First, the criterion for recognizing legal capacity based on the presence of consciousness and self-awareness is abstract; it allows for numerous offences and abuses of law, and provokes social and political problems as an additional driver of the stratification of society. This idea was developed in detail in the work of S. Chopra and L. White, who argued that consciousness and self-awareness are not a necessary and/or sufficient condition for recognizing AI systems as legal subjects. In legal reality, fully conscious individuals, for example, children (or slaves in Roman law), are deprived of or limited in legal capacity. At the same time, persons with severe mental disorders, including those declared incapacitated or in a coma, etc., who are objectively unable to be conscious, in the first case remain legal subjects (albeit in a limited form), and in the second case retain the same full legal capacity, without major changes in their legal status. The potential consolidation of the mentioned criterion of consciousness and self-awareness would make it possible to arbitrarily deprive citizens of legal capacity.

Secondly, artificial intelligence systems will not be able to exercise rights and obligations in the established legal sense, since they operate on the basis of a previously written program, whereas legally significant decisions should be based on a person's subjective, moral choice (Morhat, 2018b), their direct expression of will. All moral attitudes, feelings, and desires of such a "person" become derivatives of human intelligence (Uzhov, 2017). The autonomy of artificial intelligence systems, in the sense of their ability to make decisions and implement them independently, without external anthropogenic control or targeted human influence (Musina, 2023), is not comprehensive. Nowadays, artificial intelligence is only capable of making "quasi-autonomous decisions" that are in some way based on the ideas and moral attitudes of people. In this regard, only the "action-operation" of an AI system can be considered, excluding the ability to make a real moral assessment of artificial intelligence behavior (Petiev, 2022).

Thirdly, the recognition of the individual legal capacity of artificial intelligence (especially in the form of equating it with the status of a natural person) leads to a destructive change in the established legal order and the legal traditions that have been formed since Roman law, and raises a number of fundamentally insoluble philosophical and legal issues in the field of human rights. The law, as a system of social norms and a social phenomenon, was created with due regard to human capabilities and to ensure human interests. The established anthropocentric system of normative provisions and the international consensus on the concept of inherent rights would be rendered legally and factually invalid if an approach of "extreme inclusivism" were adopted (Dremlyuga & Dremlyuga, 2019). Therefore, granting the status of a legal entity to AI systems, in particular "smart" robots, may not be a solution to existing problems, but a Pandora's box that aggravates social and political contradictions (Solaiman, 2017).

Another point is that the works of proponents of this concept usually mention only robots, i.e., cyber-physical artificial intelligence systems that interact with people in the physical world, while virtual systems are excluded, although strong artificial intelligence, if it emerges, will be embodied in a virtual form as well.

Based on the above arguments, the concept of the individual legal capacity of an artificial intelligence system should be regarded as legally impossible under the current legal order.

The concept of collective personality with regard to artificial intelligent systems has gained considerable support among proponents of the admissibility of such legal capacity. The main advantage of this approach is that it excludes abstract concepts and value judgments (consciousness, self-awareness, rationality, morality, etc.) from legal work. The approach is based on the application of legal fiction to artificial intelligence.

As for legal entities, there are already "advanced regulatory methods that can be adapted to solve the dilemma of the legal status of artificial intelligence" (Hárs, 2022).

This concept does not imply that AI systems are actually granted the legal capacity of a natural person; it is only an extension of the existing institution of legal entities, suggesting that a new category of legal entities, called cybernetic "electronic organisms", should be created. This approach makes it more appropriate to consider a legal entity not in accordance with the modern narrow concept (in particular, the obligation that it may acquire and exercise civil rights, bear civil liabilities, and be a plaintiff and defendant in court on its own behalf), but in a broader sense, which represents a legal entity as any structure other than a natural person endowed with rights and obligations in the form provided by law. Thus, proponents of this approach suggest considering a legal entity as a subject entity (ideal entity) under Roman law.

The similarity between artificial intelligence systems and legal entities is manifested in the way they are endowed with legal capacity: through mandatory state registration. Only after passing the established registration procedure is a legal entity endowed with legal status and legal capacity, i.e., it becomes a legal subject. This model keeps discussions about the legal capacity of AI systems within the legal field, excluding the recognition of legal capacity on other (extra-legal) grounds, without internal prerequisites, whereas a human is recognized as a legal subject by birth.

The advantage of this concept is the extension to artificial intelligent systems of the requirement to enter information into the relevant state registers, similar to the state register of legal entities, as a prerequisite for granting them legal capacity. This mechanism performs the important function of systematizing all legal entities and creating a single database, which is important both for state authorities to control and supervise (for example, in the field of taxation) and for potential counterparties of such entities.

The scope of rights of legal entities in any jurisdiction is narrower than that of natural persons; therefore, using this structure to grant legal capacity to artificial intelligence does not entail granting it a number of the rights proposed by proponents of the previous concept.

When the legal fiction technique is applied to legal entities, it is assumed that the actions of a legal entity are accompanied by an association of natural persons who form its "will" and exercise that "will" through the governing bodies of the legal entity.

In other words, legal entities are artificial (abstract) units designed to serve the interests of the natural persons who acted as their founders or controlled them. Likewise, artificial intelligent systems are created to meet the needs of certain individuals: developers, operators, owners. A natural person who uses or programs an AI system is guided by his or her own interests, which this system represents in the external environment.

Assessing such a regulatory model in theory, one should not forget that a complete analogy between the positions of legal entities and AI systems is impossible. As mentioned above, all legally significant actions of legal entities are accompanied by natural persons who directly make these decisions. The will of a legal entity is always determined and fully controlled by the will of natural persons. Thus, legal entities cannot operate without the will of natural persons. As for AI systems, there is already an objective problem of their autonomy, i.e., the ability to make decisions without the intervention of a natural person after the moment of the direct creation of such a system.

Given the inherent limitations of the concepts reviewed above, a number of researchers offer their own approaches to addressing the legal status of artificial intelligent systems. Conventionally, these can be attributed to different variations of the concept of "gradient legal capacity", according to the researcher from the University of Leuven D. M. Mocanu, who implies a limited or partial legal status and legal capacity of AI systems, with a reservation: the term "gradient" is used because it is not only about including or not including certain rights and obligations in the legal status, but also about forming a set of such rights and obligations with a minimum threshold, as well as about recognizing such legal capacity only for certain purposes. The two fundamental types of this concept may include approaches that justify:

1) granting AI systems a special legal status and including "electronic persons" in the legal order as an entirely new category of legal subjects;

2) granting AI systems a limited legal status and legal capacity within the framework of civil legal relations through the introduction of the category of "electronic agents".

The positions of proponents of different approaches within this concept can be united, given that there are no ontological grounds to consider artificial intelligence a legal subject; however, in specific cases there are already functional reasons to endow artificial intelligence systems with certain rights and obligations, which "proves the best way to promote the individual and public interests that should be protected by law" by granting these systems "limited and narrow" forms of legal personality.

Granting special legal status to artificial intelligence systems by establishing a separate legal institution of "electronic persons" has a significant advantage in the detailed explanation and regulation of the relations that arise:

– between legal entities and natural persons and AI systems;

– between AI systems and their developers (operators, owners);

– between a third party and AI systems in civil legal relations.

Within this legal framework, the artificial intelligence system will be controlled and managed separately from its developer, owner, or operator. When defining the concept of the "electronic person", P. M. Morkhat focuses on the application of the above-mentioned legal fiction approach and on the functional direction of a specific model of artificial intelligence: an "electronic person" is a technical and legal image (which has some features of legal fiction as well as of a legal entity) that reflects and implements a conditionally specific legal capacity of an artificial intelligence system, which differs depending on its intended function or purpose and capabilities.

Similarly to the concept of collective persons in relation to AI systems, this approach involves keeping special registers of "electronic persons". A detailed and clear description of the rights and obligations of "electronic persons" is the basis for further control by the state and by the owner of such AI systems. A clearly defined range of powers, a narrowed scope of legal status, and the legal capacity of "electronic persons" will ensure that this "person" does not go beyond its program as a result of potentially independent decision-making and constant self-learning.

This approach implies that artificial intelligence, which at the stage of its creation is the intellectual property of software developers, may be granted the rights of a legal entity after appropriate certification and state registration, while the legal status and legal capacity of an "electronic person" will be preserved.

The implementation of a fundamentally new institution into the established legal order will have serious legal consequences, requiring comprehensive legislative reform, at least in the areas of constitutional and civil law. Researchers reasonably point out that caution should be exercised when adopting the concept of an "electronic person", given the difficulties of introducing new persons into legislation, since the expansion of the concept of "person" in the legal sense may potentially lead to restrictions on the rights and legitimate interests of existing subjects of legal relations (Bryson et al., 2017). These aspects cannot be ignored, since the legal capacity of natural persons, legal entities, and public law entities is the result of centuries of evolution of the theory of state and law.

The second approach within the concept of gradient legal capacity is the legal concept of "electronic agents", primarily related to the widespread use of AI systems as a means of communication between counterparties and as tools for online commerce. This approach can be called a compromise, as it admits the impossibility of granting AI systems the status of full-fledged legal subjects while establishing certain (socially significant) rights and obligations for artificial intelligence. In other words, the concept of "electronic agents" legalizes the quasi-subjectivity of artificial intelligence. The term "quasi-legal subject" should be understood as a legal phenomenon in which certain elements of legal capacity are recognized at the official or doctrinal level, but the establishment of the status of a full-fledged legal subject is impossible.

Proponents of this approach emphasize the functional features of AI systems that allow them to act as both a passive tool and an active participant in legal relations, potentially capable of independently generating legally significant contracts for the system owner. Therefore, AI systems can be conditionally considered within the framework of agency relations. When creating (or registering) an AI system, the initiator of the "electronic agent" activity enters into a virtual unilateral agency agreement with it, as a result of which the "electronic agent" is granted a number of powers, in the exercise of which it can perform legal actions that are significant for the principal.

Sources:

  • McLay, R. (2018). Managing the Rise of Artificial Intelligence.
  • Bertolini, A., & Episcopo, F. (2022). Robots and AI as Legal Subjects? Disentangling the Ontological and Functional Perspective.
  • Alekseev, A. Yu., Alekseeva, E. A., & Emelyanova, N. N. (2023). Artificial Personality in Social and Political Communication. Artificial Societies.
  • Shutkin, S. I. (2020). Is the Legal Capacity of Artificial Intelligence Possible? Works on Intellectual Property.
  • Ladenkov, N. Ye. (2021). Models of Granting Legal Capacity to Artificial Intelligence.
  • Bertolini, A., & Episcopo, F. (2021). The Expert Group's Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies: A Critical Assessment.
  • Morkhat, P. M. (2018). On the Question of the Legal Definition of the Term Artificial Intelligence.
