Artificial Intelligence and Human Rights: Four Realms of Discussion, Research, and Annotated Bibliography

By Jootaek Lee

Jootaek Lee is an assistant professor and librarian at Rutgers Law School (Newark). Professor Lee is also an adjunct professor and affiliated faculty member for the Program on Human Rights and the Global Economy (PHRGE) at the Northeastern University School of Law and a Massachusetts attorney. Professor Lee, a prolific scholar and author, has been published in prestigious journals, including Georgetown Environmental Law Review, Northwestern Journal of Human Rights, Emory International Law Review, Law Library Journal, International Journal of Legal Information, Legal Reference Services Quarterly, Korea University Law Review, and Globalex by New York University Law School. His research focuses on human rights to land, water, and education; Asian practice of international law, especially human rights and international criminal law; legal informatics; Korean law and legal education; and pedagogy in law. He has made numerous presentations at national and international conferences. He is active with the American Association of Law Libraries (AALL) and the American Society of International Law (ASIL), having served on AALL’s Diversity Committee, CONELL Committee, and Awards Committee. He is the former Co-Chair of the International Legal Research Interest Group of ASIL (2012-2015) and the former president of the Asian American Law Librarians Caucus of AALL (2013-2014).

NOTE: This article is an abridged version of a full article by Jootaek Lee, Artificial Intelligence and Human Rights: Four Realms of Discussion, Research, and Annotated Bibliography, 1 Rutgers International and Human Rights Journal (2021).

Published July/August 2022

Table of Contents

1. Introduction
2. Annotated Bibliography
2.1. Books
2.2. Articles

1. Introduction

The term “artificial intelligence” (“AI”) has changed in meaning since it was first coined by John McCarthy in 1956.[1] AI, whose conceptual roots are often traced to Kurt Gödel’s unprovable computational statements of 1931,[2] is now commonly discussed in terms of deep learning or machine learning. AI refers to computer systems that can make predictions about the future and solve complex tasks using algorithms.[3] AI algorithms are enhanced and become effective with big data, capturing the present and the past while still reflecting human biases in their models and equations.[4] AI can also make choices like humans, mirroring human reasoning.[5] AI can help robots efficiently repeat the same labor-intensive procedures in factories. It can also analyze historical and present data more efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision-making, automated intelligence for labor robots, and assisted intelligence for data analysis.[6]
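To illustrate the point about bias, consider a minimal, hypothetical sketch (the hiring scenario, data, and function names below are invented for illustration and do not come from any cited source) of how a system that “learns” from historical records simply reproduces whatever bias those records contain:

```python
# Hypothetical example: a toy "predictive" model trained on past hiring
# decisions. If the historical record is biased, the prediction is too.
history = [
    {"experience": 5, "group": "A", "hired": True},
    {"experience": 5, "group": "B", "hired": False},
    {"experience": 2, "group": "A", "hired": True},
    {"experience": 8, "group": "B", "hired": False},
]

def hire_rate(group):
    """Fraction of past applicants in the group who were hired."""
    rows = [r for r in history if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

def predict_hire(group):
    """'Predict' by reproducing the historical pattern for the group."""
    return hire_rate(group) > 0.5

print(predict_hire("A"))  # True: mirrors the biased historical record
print(predict_hire("B"))  # False: the bias is reproduced, not corrected
```

Larger statistical models differ from this toy in scale rather than in kind: their outputs are shaped by the data they are given.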

This spectrum, however, will be further expanded with the development of Artificial General Intelligence (“AGI”), also known as super-intelligence. AGI, a set of algorithms that learns and develops multiple self-intelligences independently to resolve problems across multiple domains, will accelerate the displacement of human labor. Just as Jeremy Rifkin’s The End of Work foresaw the end of farmers, blue-collar workers, and service workers as a result of the First, Second, and Third Industrial Revolutions,[7] AGI developed after the Fourth Industrial Revolution, with its AI, robots, biotechnology, nanotechnology, and autonomous vehicles, may displace most current human jobs. This issue has become a major conundrum for contemporary society.

Whether AI displaces human labor during the Fourth Industrial Revolution will depend upon how we define the nature of the mind.[8] Under Gardner’s theory,[9] some intelligences rely heavily on the mind, such as musical intelligence, interpersonal intelligence, intrapersonal and metacognitive intelligence, and naturalistic intelligence.[10] To the extent that our world needs and values only the visual-spatial, linguistic-verbal, and logical-mathematical intelligences of the brain, AGI, the so-called superhuman intelligence, may bring the human era to an end.[11] Furthermore, some scholars suggest that even curiosity and creativity, which are relevant to musical and intrapersonal intelligence, can be defined and interpreted in a new way, as “connecting previously disconnected patterns in an initially surprising way,” and thus can be reached and realized by AI.[12] The “Singularity” is defined as the acceleration of technological progress.[13] It is predicted to cause exponential “runaway” reactions beyond any hope of control: the Singularity will surpass human intelligence and lead to irreversible changes to human civilization, in which most human affairs may not continue.[14]

When seen from a different angle, however, AI can be viewed more optimistically. Robots and machines equipped with AI can co-exist harmoniously with humans, much as animals do. They may replace humans in many traditional jobs, but humans may also find new jobs supervising, educating, and designing AI. If AI machines are given a moral status similar to that of humans, humans will be able to feel empathy toward robots, and the need to protect them as human equivalents will become conceivable.[15]

In accordance with Moore’s Law,[16] AI has developed exponentially over the last ten years, and this noticeable development allows us to predict both positive and negative results from AI more accurately. AGI seemed far from reality when AI was used only to solve computational problems and play games against humans. For example, the IBM Watson AI computer won first place on Jeopardy! in 2011, and the AlphaGo AI program defeated professional Go player Lee Sedol in 2016.

Nowadays, AI has been realized in more concrete forms such as androids, autonomous vehicles, autonomous killing machines, and various video avatars. These new forms of AI have affected human lives extensively and, as a result, have impacted human rights and humanitarian law.[17] AI is no longer a story of the future; it is the new normal, affecting contemporary everyday life. AGI is also believed likely to be fully developed within 20 to 50 years. The current moment is a good time to thoroughly analyze the human rights implications of AI, both philosophically and empirically, before AI development passes the Singularity.

Research on AI and its human rights implications can be divided into four realms. These four realms are devised according to the level of AI development and its human rights implications; they may overlap at certain stages of development and points in time. Humans devised AI to benefit themselves (the First Realm), and AI began to have the side effect of harming humans (the Second Realm). Humans then began anthropomorphizing AI and feeling an obligation to protect it (the Third Realm). Finally, AI, especially AGI, begins claiming rights of its own (the Fourth Realm).

In the first and second realms, AI is discussed as a passive, beneficial object of human life, and paternalistic attitudes toward AI remain. Until general international law regulating AI is drafted and adopted, democratic accountability for governments’ misuses of AI should be regulated in a uniform way by the current, legally binding universal human rights system.

Many international human rights principles will apply to AI as a passive object, and scholars have considered which human rights will be affected by AI development. Various working groups for global governance, including governments, international organizations, and private entities and institutions, have produced statements and principles to regulate AI development. The primary goals sought by this regulation include accuracy; transparency; human-centered design; lawfulness; ethical and robust safety; privacy and data governance; diversity, non-discrimination, and fairness; societal and environmental wellbeing; and accountability.

Scholars and practitioners in the third and fourth realms of AI discussion have not resolved whether AI is an active subject of human rights and protection. Whether AI is human or human-like and enjoys human rights is not clear. Just as animal rights are a different category from human rights and are provided on sympathetic grounds, AI may be able to enjoy rights in a category separate from human rights. However, this idea has the potential to shift AI discussion from the international realm to domestic legal realms. Furthermore, this temporary solution will not last long; a clearer resolution of whether AI/AGI is entitled to human rights is needed before the Singularity passes. AI has broad, comprehensive impacts on all humankind, so global cooperation and governance among states, international organizations, and private entities is necessary to deal with this issue.

2. Annotated Bibliography

2.1. Books

Robot Law (Ryan Calo, Michael Froomkin & Ian Kerr eds., Edward Elgar Publishing 2016). This book is a collection of articles from various authors and is organized into five sections.

Each of the five sections is divided into chapters; there are 14 chapters in total. Section I contains only one chapter: How Should The Law Think About Robots? by Neil M. Richards and William D. Smart. The aim of this chapter is to define the “conceptual issues surrounding law, robots and robotics[.]”[18] Subsection 1, ‘What is a robot?’, defines robots as “a constructed system that display[s] both physical and mental agency but is not alive in the biological sense.”[19] The authors further specify that the machine may only have the appearance of autonomy, and that the definition excludes AI that has no physical presence in the actual world.

In Subsection 2, ‘What Can Robots Do?’, the authors describe the many different kinds of robots present in our daily lives, such as Roombas, cruise missiles, NASA space robots, and the autonomous Kiva systems used by online retailers to move merchandise.[20] Essentially, this article argues that there is nothing that robots cannot be programmed to do, and that as technology becomes more and more integrated into our daily lives, a legal framework and relevant protections must be in place to regulate rapidly changing technology.

Subsection 3, ‘Robolaw and Cyberlaw’, discusses how “robot-specific” laws must be made in order to effectively regulate the new issues raised by AI. Uncertainty and ambiguity about robotic issues (such as liability) only impede the development and widespread use of the technology. This subsection also asserts that “how we regulate robots will depend on the metaphors we use to think about them.”[21] The authors use examples from a series of Fourth Amendment surveillance cases to highlight the importance of choosing the right metaphors when creating legislation. Olmstead v. United States and Katz v. United States both addressed whether telephone wiretapping invaded the constitutional right to privacy. The authors argue that the Olmstead court misunderstood privacy as pertaining only to physical searches. By “[clinging] to outmoded physical-world metaphors for the ways police could search without a physical trespass,” the court failed to see the threat new technology posed to limits on federal power and to constitutional rights.[22] This line of cases still affects how technology (like GPS tracking) may be used today.

In Subsection 4, ‘The Importance of Metaphors,’ the article reiterates that “[h]ow we think about, understand, and conceptualize robots will have real consequences at the concept, engineering, legal and consumer stages.”[23] The authors use examples such as equating Netflix to a video store and describing the theft of digital media as “piracy.” The use of metaphors can constrain or assist the way technology is created and received by the consuming public.

Subsection 5, ‘The Android Fallacy,’ focuses on the tendency of people to “project human attributes” onto robots.[24] This pertains not only to physical appearance but to the appearance of free will as well. It is important to always know the cause of a robot’s agency; otherwise, legislative decisions may end up “based on the form of a robot, not the function.”[25] The authors compare and contrast an android designed to deprive humans of a $20 reward with a vending machine that eats your change. Functionally, there is no difference between the end results. However, in a study, 65% of subjects attributed moral accountability to the android. This subsection concludes with the statement that “we should not craft laws just because a robot looks like a human…, but we should craft laws that acknowledge that members of the general public will, under the right circumstances, succumb to the Android Fallacy[.]”[26]

In Subsection 6, the authors very briefly ask how we should classify robots that collaborate with a human operator because they are not fully autonomous. Should we consider these kinds of robots as “a portal or avatar” for their operator?[27]

Chapter 9 is Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, by Kate Darling. This chapter is divided into seven subsections and addresses the topic of humanoid robots that are designed to socialize with human beings.

After the introduction, subsection 2 asks ‘What is a Social Robot?’ A “social robot” is defined as “a physically embodied, autonomous agent that communicates and interacts with humans on a social level.”[28] Some examples of social robots mentioned include toys like the robotic dinosaur Pleo and Sony’s Aibo dog. There are also therapeutic robots like the Paro baby seal, and MIT has built robots such as Kismet, AIDA, and Leonardo.

Subsection 3, ‘Robots vs. Toasters: Projecting our Emotions,’ examines how robots create effective engagement with human beings. Darling asserts that humans are susceptible to forming emotional attachments to non-living things and “will ascribe intent, states of mind, and feelings to robotic objects.”[29] This point is illustrated with the movie Cast Away, in which the main character expresses “deep remorse” for not taking care of his volleyball friend, Wilson. The subsection next discusses three factors that impact human relationships with social robots: 1) physicality, 2) “perceived autonomous movement”, and 3) social behavior.[30] Darling argues that we are “hardwired to respond differently to object[s] in [our] physical space” as opposed to virtual objects.[31] Secondly, we project intent onto a robot when we cannot anticipate its movements. The Roomba is used as an example: it moves according to a simple algorithm, but because it moves on its own, people tend to “name it, talk to it, and feel bad for it when it gets stuck under the couch.”[32] When these robots mimic our social cues and are designed to express human emotions, they elicit emotional reactions and “may target our involuntary biological responses.”[33] The author, citing psychologist Sherry Turkle, discusses the notion of “the caregiver effect,” which evokes a sense of mutual nurturing or “reciprocity” between a human and a social robot that is programmed to act dependently.[34] Furthermore, our responses to these robots are not voluntary because these social robots “[play] off of our natural responses.”

Subsection 4, ‘The Issues Around Emotional Attachment to Robots,’ discusses the ethical issues that arise from social robots. The subsection begins with various concerns: 1) society will not be able to distinguish between reality and virtual reality, “thereby undermining values of authenticity”; 2) social robots will replace real human interactions; 3) social robots will manipulate human beings through software that provides advertisements, and may even collect private data without users’ consent.[35] On the other hand, the author notes that social robots also provide benefits. For example, the Paro seal assists dementia patients, and robotic interactions can help motivate people.[36] Next, the author explores why humans feel that “violent behavior toward robotic objects feels wrong… even if we know that the ‘abused’ object does not experience anything.”[37] The author posits that this is because we want to protect societal values. For example, a parent would stop his or her child from kicking or abusing a household robot because they want to discourage behavior that would be detrimental in other contexts. A related concern is the possibility that human beings could act out abusive sexual behaviors towards social robots. The underlying concern is that when “the line between lifelike and alive [is] muddle[d] in our subconscious,” certain actions towards robots could cause us to become desensitized and lose empathy towards other objects or things.[38]

In subsection 5, ‘Extending Legal Protection to Robotic Objects,’ Darling posits that protection for social robots could be modeled after animal abuse laws. The author also states that while philosophical arguments against animal abuse are based on an animal’s “inherent dignity” and the prevention of unnecessary pain, the laws show that they are made to address human emotional states more than anything else. It causes us discomfort to see animals suffer or appear to be in pain. Robots “invoke the experience of pain” in a similar manner, even if they do not actually experience suffering.[39] In order to pass laws, Darling argues, a good definition of “social robot” needs to be made, and social robots must be distinguished from other types of robots or objects. Darling offers a working definition: “(1) an embodied object with (2) a defined degree of autonomous behavior that is (3) specifically designed to interact with humans on a social level and respond to mistreatment in a lifelike way.”[40] Darling also notes that “mistreatment” would have to be defined appropriately.[41]

2.2. Articles

Deborah G. Johnson and Mario Verdicchio, Why Robots Should Not Be Treated Like Animals, 20 Ethics and Information Technology 291-301 (2018).
This article concerns itself primarily with the creation of social robots with humanoid features. It is divided into four subsections. The authors first examine the common tendency to analogize human-like robots to animals and detail the commonalities between human interactions with animals and with robots. This analogy is used as a touchstone to explore a variety of concepts throughout the paper. The authors reference Coeckelbergh, who used the analogy to understand how the appearance of robots affects the way human beings experience robots.[42] Ashrafian stated that robots were similar to dogs in that they are subordinate to human beings but have “some sort of moral status.”[43] Sullins argues that robots, like guide dogs, are technology. The section concludes that the analogy to animals is not a practical comparison to make when it comes to robots.

The next section enumerates why this analogy fails. The key argument is that robots cannot acquire moral status because they are incapable of suffering, regardless of whether they attain consciousness in the future. The authors reason that if animals acquire their moral status from their ability to suffer, robots would have to acquire their moral status in the same way. As a secondary matter, the authors also ask whether it would be wrong for humans to build robots that suffer.[44]

The third section considers legal liability for robots. The authors cite Asaro, who suggests that the animal analogy is useful for placing responsibility on the owners and manufacturers of robots. The authors also reference Schaerer’s framework for imposing tortious concepts of strict liability and negligence for the misbehavior of robots.[45] A distinction is made between animal autonomy and robotic autonomy. Animals are living entities, and when humans train animals, they work within the limitations of an animal’s nature. A robot’s software, on the other hand, has been coded by human beings.[46] For this reason, animals and robots are dissimilar.

The fourth section considers whether our treatment of robots impacts our treatment of other human beings. This question is based on the Kantian claim, and the consequent discussion by Darling, that “if we treat animals in inhumane ways, we become inhumane persons.”[47] The authors argue that while cruelty towards animals or robots suggests inhumanity, there is no scientific evidence of direct causation. The authors reiterate their previous argument that robots cannot actually suffer or experience pain and distress, but merely give the appearance of suffering. They concede that the arguments may change if robots become so human-like that people can no longer distinguish the AI from actual human beings. Lastly, the article muses on the directions policy may take: 1) making laws to restrict behavior towards humanoid robots, and 2) restricting the design of robots.[48]

David J. Gunkel, The Other Question: Can and Should Robots Have Rights?, 20 Ethics and Information Technology 87-99 (2018).
Gunkel applies philosopher David Hume’s is/ought statement framework to examine whether robots should and can have rights. The article is organized into four different modalities, allowing the author to apply a “cost-benefit analysis” to the arguments for each modality. The first modality is “Robots cannot have rights. Therefore robots should not have rights.” The second modality is “Robots can have rights. Therefore robots should have rights.” The third modality is “Even though robots can have rights, they should not have rights.” And finally, the fourth modality is “Even though robots cannot have rights, they should have rights.”[49]

After describing the literature that supports each modality, the author also describes the problems with each modality. Ultimately, Gunkel advocates for an alternative form of thought, which he terms “thinking otherwise.”[50] Applying Emmanuel Levinas’s philosophy, Gunkel argues that “ethics precedes ontology”; in other words, it is the “ought” dimension that comes first, in terms of both temporal sequence and status, and the ontological aspects follow from this decision.[51]

Mathias Risse, Human Rights and Artificial Intelligence: An Urgently Needed Agenda, 41 Human Rights Quarterly 2 (2019).
In this paper, Risse discusses the current and future implications of AI in modern society. The article is divided into five sections after the introduction: 1) AI and Human Rights; 2) The Morality of Pure Intelligence; 3) Human Rights and the Problem of Value Alignment; 4) Artificial Stupidity and the Power of Companies; and 5) The Great Disconnect: Technology and Inequality.

In the first section, ‘AI and Human Rights,’ Risse briefly discusses the similarities and differences between complex algorithms and the concept of consciousness.[52] The next section then discusses the concept of “superintelligence” and when AI may reach the Singularity, the point at which machines surpass human intelligence. Risse then asks how a superintelligence would value and apply morals. Using the theories of four philosophers, Hume, Kant, Hobbes, and Scanlon, the author hypothesizes about how AI superintelligence may understand morality or rationality.[53]

The third section focuses on the present and on what society can do now to ensure that AI adheres to human rights principles, even though there will come a time when it is smart enough to violate them. The author briefly discusses the UN Guiding Principles on Business and Human Rights and the Future of Life Institute’s Asilomar Principles as two efforts to create doctrines that robots should follow. Risse suggests that in order for AI to acquire human rights values, there should be “more interaction among human-rights and AI communities.”[54] The fourth section addresses the problem of “artificial stupidity,” which includes the manipulation of data to spread false information, the lack of transparency, and the ownership of private data by corporations.[55] The final section addresses, as the title suggests, the “technological wedge” in society. Risse explains that technological advancements impact economic growth, employment levels, and poverty levels.[56]

Eileen Donahoe and Megan MacDuffee Metzger, Artificial Intelligence and Human Rights, 30 Journal of Democracy 115-126 (Johns Hopkins University Press, 2019).
In this article, Donahoe and Metzger primarily argue that “the existing universal human-rights framework is well suited to serve” as a “global framework…to ensure that AI is developed and applied in ways that respect human dignity, democratic accountability, and the bedrock principles of free societies.”[57] The article is organized into three sections: 1) Societal and Ethical Concerns About AI; 2) A Human-Centered Ethics for AI; and 3) Governing AI Through a Human Rights Lens.

The first section explains the following concerns with AI technology: 1) machines will take over the human world; 2) “how to weigh whether or when various applications of AI are ethical, who should make judgments, and on what basis”;[58] and 3) unintended negative effects such as embedded bias and limitations on free choice. The article then examines four particular features of the human-rights framework that make it compatible with AI governance: 1) the human person is the “focal point of governance and society”; 2) the human rights framework addresses “the most pressing societal concerns about AI”; 3) it describes the rights and duties of government and the private sector; and 4) the framework is shared by many nations and is “understood to be universally applicable.”[59]

In the second section, “A Human-Centered Ethics for AI,” the authors name Articles 2, 3, 8-12, 19, 20-21, 23, and 25 of the UDHR as critical sections that address the potential impacts of AI. These Articles of the UDHR speak to security, discrimination, equal protection, freedom of expression, and the right to enjoy an adequate standard of living. Lastly, the authors note that a crucial advantage of the existing human rights framework is that it “enjoys a level of geopolitical recognition and status under international law that no newly emergent ethical framework can match.”[60]

The third section, “Governing AI Through a Human Rights Lens,” provides practical ways to begin implementing the human rights approach to AI. The first method is “transparency in both governmental and business uses of decision-making algorithms,” while a second idea is based on the concept of “human rights by design,” which means that assessment and reflection of human rights must occur as technology is being developed.[61] Other methods for implementation include accountability and education of young technologists about existing human rights standards.

Jutta Weber, Robotic Warfare, Human Rights & The Rhetorics of Ethical Machines, in Ethics and Robotics (2009).
This paper is organized into thirteen short sections. The main goal of the article is to explain recent developments in uninhabited combat aerial vehicles (UCAVs) and the “ethical, political, and sociotechnical implications” of these developments.[62] The first five sections discuss the gradual progression towards the use of uninhabited aerial vehicles by the U.S., Israel, and some European countries. The author notes that the United States has devoted $127 billion to the development of new unmanned/uninhabited combat robots.[63] UCAVs are controlled from the ground by radio, laser, or satellite link. They are used for “targeted killing missions” and have been used mostly in Iraq, Pakistan, and Afghanistan.[64] The author argues that while this new technology is supposed to increase precision, these air attacks have resulted in hundreds of innocent civilian deaths. In 2006, the Israeli Supreme Court held that “international law constrains the targeting of terror suspects,” but refused to ban Israel’s targeted killing policies. In addition, the court required that reliable information prove the target is “actively engaged in hostilities” and that an arrest would be too risky.[65] Moreover, an independent investigation must be conducted after each strike.

In the sixth section, titled ‘The Price of New Warfare Scenarios: On Racism, Sexism & Cost-Efficiency,’ the author discusses the cost of this new kind of warfare. The author argues that while this unmanned robotic technology is lauded for decreasing the number of human soldiers that need to be on the ground, there is “no concern for the humanitarian costs of these new technologies with regard to the non-combatants of other (low-tech) nations ….”[66] The author notes that the casualties of such warfare are not limited to robots. The article also states that the cost-efficiency of producing and using UCAVs has the potential to lead to an arms race among Western countries. The article notes an additional problem in the next section: new technology will not lead to effective deterrence and shorter wars. Instead, it “will lead to a lowering of the threshold of warfare.”[67]

The article also addresses the implications of this kind of new warfare for international law in the tenth section, ‘Uninhabited Systems and Jus in Bello.’ This section considers the implications if responsibility is no longer an issue in robotic warfare. One consequence could be that battle could easily and quickly get out of control. The author also discusses how to distribute responsibility among the programmer, the machine, and the commanding officer. If the manufacturer gave the appropriate warnings regarding the use of the autonomous weapons system (AWS), it could not be held responsible for any malfunctions.[68] The author asserts that it is not yet reasonable to hold autonomous machines responsible because of their limited cognitive abilities; however, if a system is supposed to act increasingly autonomously, the programmer cannot be responsible “for the negative outcome of the unpredictable behavior of an autonomous system.”[69]

In section eleven, ‘Push-Button Wars on Low-Tech Nations?,’ the article concerns itself with the possibility that the increased use of autonomous weapons systems makes war too easy and destabilizes situations. The author ultimately argues for a ban on autonomous weapons systems.[70] The author argues that the ease of robotic wars and decreased responsibility would encourage risky military maneuvers. Moreover, robots will do what they are programmed to do and will be incapable of disobeying inhumane orders, resulting in a change for international law.[71]

In the last section before the conclusion, ‘The Rhetoric of Moral Machines,’ the author presents a critique of roboticist Ronald Arkin’s approach of installing “ethical” software. In a brief summary of Arkin’s arguments, the article explains that in the future, robots with this “ethical” software may become better than humans at determining whether a target is a legitimate threat. Robots would have faster computing power and would be able to make lethal decisions.[72] In response, the author argues that 1) robot systems may be able to compute faster but still have only the same amount of information as a human soldier would; 2) advanced robots made in our time would still not have the ability to “resist the performance of an unethical act” and would be unable to explain their reasoning; 3) ethical robot systems will not be fully developed in the near future; and 4) Arkin fails to answer the question, “[h]ow can one make sure that a system is applying rules adequately to a certain situation and that the system decides correctly that it is allowed to apply its rule to this specific situation?”[73]

Filippo A. Raso et al., Artificial Intelligence & Human Rights: Opportunities & Risks (Berkman Klein Center, 2018).
This report is divided into eight sections and essentially aims to evaluate how AI impacts economic, social, and cultural rights. After a brief introduction, the authors ask “[w]hat is Artificial Intelligence?” in Section 2. The report acknowledges that AI technology develops at a rate so fast that it is difficult to provide a concrete definition of artificial intelligence. The authors categorize AI into two “buckets”: 1) knowledge-based systems, which cannot learn but instead determine optimal decisions from a specific, limited set of data; and 2) machine learning systems, which “use[] statistical learning to continuously improve their decision-making performance.”[74] The report also notes that its findings are limited to the AI systems that are currently in use, and that it does not evaluate AI’s theoretical capacities.
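As a rough illustration of the report’s two buckets, the following sketch (the lending scenario, data, and function names are invented here, not drawn from the report) contrasts a fixed, hand-written rule with a toy routine that derives its rule from past data:

```python
# Hypothetical example: a knowledge-based rule versus a toy "learned" rule.

def knowledge_based_approve(income, debt):
    """Fixed rule encoded by a human designer; it never changes with data."""
    return income > 2 * debt

def learn_threshold(examples):
    """Toy statistical learning: pick the income/debt ratio that best
    separates past approvals from denials in the training data."""
    ratios = sorted(e["income"] / e["debt"] for e in examples)
    best, best_correct = ratios[0], -1
    for t in ratios:
        correct = sum((e["income"] / e["debt"] >= t) == e["approved"] for e in examples)
        if correct > best_correct:
            best, best_correct = t, correct
    return best

history = [
    {"income": 50, "debt": 40, "approved": False},
    {"income": 90, "debt": 30, "approved": True},
    {"income": 60, "debt": 20, "approved": True},
]
threshold = learn_threshold(history)
print(knowledge_based_approve(90, 30), 90 / 30 >= threshold)  # True True
```

The first function never changes unless a human rewrites it; the second shifts with whatever examples it is trained on, which is what allows it to “continuously improve” and also to absorb the flaws of its training data.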

In Section 3, “What are Human Rights?”, the report briefly explains that human rights are derived from the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and the International Covenant on Economic, Social and Cultural Rights (ICESCR). Section 4, ‘Identifying the Human Rights Consequences of AI,’ lays out a framework for identifying “pre-existing institutional structures” (in other words, the context within which AI is created). The two-step methodology is as follows: 1) establish the baseline; and 2) identify the impacts of AI.[75] The report further notes that there are three sources from which AI’s human rights impacts arise: 1) the quality of training data; 2) system design; and 3) complex interactions.[76] In Section 5, ‘AI’s Multifaceted Human Rights Impacts,’ the report explores the consequences of AI decision-making in criminal justice, finance, healthcare, content moderation, human resources, and education.

Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242 (2019).
This article is organized into five sections. After the Introduction, Section II is titled “Modeling AI Development”; III. “Concerns”; IV. “Responses”; and V. “Conclusion: Courts and Beyond.” The article is focused on the use of artificial intelligence in making judicial decisions, and it argues that the increased use of this technology will “affect the adjudicatory values held by legal actors as well as the public at large.”[77]

Section II, “Modeling AI Development,” argues that “AI adjudication is likely to generate a shift in attitudes and practices that will alter the values underlying the judicial system… Particularly, AI Adjudication will tend to strengthen codified justice at the expense of equitable justice.”[78] In subsection A, the article explains two different models of legal change: Rule Updating and Value Updating. Rule Updating means that new technology develops and prompts the creation of new rules in the legal system, while the underlying values remain fixed. The Value Updating model, by contrast, results in a change of values. The article argues that new technology can act as a social force and lead to interaction among new technology, rules, and values. More specifically, the two ways in which new technology acts on values are 1) when technology “alter[s] individual and social capabilities in ways that disrupt established practices, catalyzing new practices and related ways of thinking”; and 2) when “new technology facilitates the spread of information that disrupts once-established understanding and opinions.”[79]

In Subsection B, the article explains the two models of adjudicatory justice: equitable justice and codified justice. The former, equitable justice, incorporates both the enforced values and the “reasoned application” of those values. It “aspires to apply consistent principles and is prepared to set aside general patterns in favor of unique circumstances.”[80] Because this model relies on context, it can seem “incompatible with automated algorithmic processes.”[81] The latter model, codified justice, “refers to the routinized application of standardized procedures to a set of facts.”[82] The author describes codified justice as the predecessor to artificial intelligence because it “aspires to establish the total set of legally relevant circumstances discoverable in individualized proceedings.”[83] Codified justice reduces such variables as bias and arbitrariness.

Subsection C argues that “AI adjudication will generate new capabilities, information, and incentives that will foster codified justice at the expense of equitable justice.”[84] The potential benefits of codified justice, such as cost-efficiency and the elimination of human bias, could make the judicial system more effective. However, the article argues that because AI cannot explain its decisions, and because the data sets AI works from would be private, AI adjudication is being pushed in a direction that erodes preexisting legal values. The article envisions that AI adjudication would provide “automated rationalizations” that would satisfy an unwitting human audience without actually providing real rationalizations for its adjudications.[85] Section II ends with a description of a “self-reinforcing cycle” in which AI adjudication will make codified justice more appealing and “push toward greater measurability, objectivity, and empiricism in the legal system.”

Section III, “Concerns,” raises four concerns about AI adjudication: “incomprehensibility, datafication, disillusionment, and alienation.”[86] Subsection A is concerned with the difficulty of comprehending AI functions. This, the article argues, is contrary to equitable justice, which favors personal explanations as rationales. The article addresses several specific worries under the incomprehensibility of AI decision-making. First, the article worries that without understandable human decision-making, the judiciary would lose its accountability to the public and to the individuals who stand in its courts. The article is further concerned that such enigmatic reasoning would “frustrate public debate and obstruct existing modes of public accountability and oversight such as impeachment or judicial election.”[87] Second, incomprehensibility could lead to issues of legitimacy or fairness for defendants. One of the essential principles of our judicial system and constitution is the right to due process, and the article argues that AI incomprehensibility could disempower the defendant.[88] Third, “AI Adjudicators might preclude optimal degrees, or desirable forms, of incomprehensibility.”[89] The argument is that some aspects of judicial decision-making need to remain unpredictable or ambiguous. For example, human judges might want to obfuscate their reasoning in order to “preserve room for jurisprudential maneuvering tomorrow.”[90] Lastly, the article argues that incomprehensibility could be unequally applied and “allow the legal system to be gamed.”[91] For example, if a detailed technical report can only be understood by experts, “only a select set of actors … would be able to parse the ’real’ explanation.”[92]

Subsection B addresses the issue of datafication, which is the emphasis on and incorporation of objective data. First, this subsection is concerned that datafication “could insulate the legal system from legitimate criticism, thereby allowing bias to flourish.”[93] If AI relies on inherently biased datasets, then the AI adjudication process will recreate or worsen preexisting biases.[94] Second, datafication could cause the legal system to become “undesirably fixed.” The system would not be susceptible to “natural updating,” such as the generational changes that come with judges rotating off the bench and with cultural and societal transformations.[95] Third, datafication will reduce reliance on significant but “less quantifiable or data rich considerations.”[96] For example, “the personal sincerity or remorse” of a defendant could be ignored. Lastly, AI adjudication could lead to adaptations in the law itself that favor measurable data. The article uses the example of malignant-heart murder in criminal law: this type of murder requires a human element that cannot be determined by implementing a standard code.[97]

Subsection C concentrates on disillusionment,[98] meaning “skeptical reconsideration of existing practices.”[99] The subsection points to three consequences of AI adjudication exposing the flaws inherent in human judgment. First, disillusionment would “erode confidence in the legal system’s legitimacy.”[100] Second, AI adjudication could diminish the position of the judiciary and alter the judiciary’s culture and composition; as a consequence, judges would have diminished authority. Finally, disillusionment could result in smaller but significant changes: 1) diminished political power; 2) the irrelevance of lawyers’ rhetoric; 3) a diminished adversarial system and movement towards an inquisitorial one; and 4) the erasure of human lawyers from the legal process.[101]

Subsection D posits that AI adjudication will cause alienation and reduce participation in the legal system. The article goes one step further and imagines a future in which the judicial system becomes fully automated with AI.[102] In addition, alienation could cause a decrease in public engagement, leaving an insufficient amount of public oversight (e.g., jury participation). The article closes Section III with hope for a “new equilibrium” between equitable justice and codified justice as AI adjudication is incorporated into the legal system.[103]

Section IV discusses four types of viable responses to the issues described above. Subsection A suggests continued experimentation with changes in the legal system as a possible solution to these problems, while recognizing the risks of experimentation where human lives are at stake.[104] In Subsection B, the article explores the possibility of “coding equity” into the AI system. The article argues that coding ethics into the system could work if the system is updated regularly, and that it could respond to issues of inequality faster than humans. However, coding equity would be difficult because the concept of “equity” contains many nuances, and the article ultimately concludes that it is an “ineffective stand-alone solution.”[105] Subsection C suggests a division of labor between humans and AI as an additional solution. For example, human judges and AI could collaborate at specific stages of the legal process, providing extra human oversight in these situations. Another possibility is to separate cases and “apportion discrete types of judicial decision-making to human[s].”[106] Subsection D posits the removal of profit-seeking actors as a way to keep the system focused on justice.

Section IV concludes that concerns about AI adjudication should be addressed by drawing from all four types of responses. The last section, Section V, summarizes the article and briefly notes the far-reaching consequences of AI adjudication for the “executive bureaucracy and administrative agencies,” where the same issues regarding codified justice would be reproduced.[107]

Hin-Yan Liu, Three Types of Structural Discrimination Introduced by Autonomous Vehicles, 51 UC Davis L. Rev. Online 149 (2017-2018).
This article centers on the crash-optimization algorithms used in autonomous vehicles. The author discusses the ways in which crash optimization relies on discriminatory systems and ends with an exploration of the impacts autonomous vehicles will have on the structure of future society. The article is organized into five parts: I) Prioritizing the Occupant in Autonomous Vehicles through Trolley-Problem Scenarios; II) Structural Biases in Crash Optimization and Trolley-Problem Ethics; III) Intentional Discrimination and the Immunity Device Thought-Experiment; IV) Structural Discrimination in the Corporate Profit-Driven Context; and V) Structural Discrimination in Urban Design and Law Revisited.

In the first section, the author explains that autonomous vehicles use crash-optimization algorithms to reduce the overall damage of an unavoidable crash.[108] This happens by establishing probabilistic courses of action. Many equate this algorithm to the classic ethics hypothetical, the trolley-problem scenario, in which the driver must decide whether to kill one person or five people by diverting the trolley onto another track. By contrasting the programming of autonomous cars with this ethics hypothetical, the author asserts three ways in which the crash-optimization program creates a discriminatory and problematic machine. Unlike the trolley driver, the decision-maker in self-driving cars is the manufacturer or the occupants of the vehicle; they are not disinterested decision-makers. For the manufacturer, the stake in the outcome of the crash-optimization program is exclusively serving the interests of customers, and for the occupants, the interest is their own personal safety. As a result, the primary focus of crash-optimization programs is to privilege their occupants over pedestrians and other third parties.[109] The author suggests that changing this perspective is one way to democratize the algorithm.
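A minimal, hypothetical sketch (the maneuvers, probabilities, and weighting below are invented for illustration and are not drawn from the article) shows how a crash-optimization routine that scores candidate maneuvers by expected harm can, through a single designer-chosen weight, systematically shift risk from occupants onto third parties:

```python
# Hypothetical crash-optimization sketch: score each candidate maneuver by
# weighted expected harm and pick the lowest. The occupant_weight parameter
# encodes whose safety the designer privileges.

def expected_harm(option, occupant_weight=1.0):
    """Weighted expected harm of one maneuver."""
    return (occupant_weight * option["p_occupant_injury"]
            + option["p_third_party_injury"])

def choose_maneuver(options, occupant_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm."""
    return min(options, key=lambda o: expected_harm(o, occupant_weight))

maneuvers = [
    {"name": "swerve_into_barrier", "p_occupant_injury": 0.6, "p_third_party_injury": 0.0},
    {"name": "brake_straight",      "p_occupant_injury": 0.2, "p_third_party_injury": 0.5},
]

# With a neutral weight the car accepts the occupant's risk; with an
# occupant-favoring weight it shifts the risk onto the third party.
print(choose_maneuver(maneuvers, occupant_weight=1.0)["name"])  # swerve_into_barrier
print(choose_maneuver(maneuvers, occupant_weight=3.0)["name"])  # brake_straight
```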

The second way in which the crash-optimization system discriminates is in the way it collects data and learns different outcomes. The author argues that the system focuses on very structured and isolated patterns of data; focusing on a single scenario overlooks a wider range of externalities.[110] The effect would only multiply as identical automobiles are widely distributed. A related concern is that self-driving cars currently identify human beings only as units, not by characteristics or physical features such as race or gender. However, there would be notable consequences if the algorithm began using identifying characteristics to determine how to reduce damage. For example, if the vehicle is programmed to hit motorcyclists wearing helmets because their odds of surviving are higher, then “certain groups will consistently bear a greater burden despite the fact that they adopted prudential measures to minimize their[] risk.”[111] The article continues to argue that a system which makes a small number of biased decisions can unintentionally take on “a systemic and collective dimension whereby the generated outcomes will be reliably and systematically skewed.”[112] Another issue is whether these small biased decisions reflect enough discriminatory intent for legal recognition; the article notes that such discrimination, while plainly discrimination, “falls outside of the scope” of Article 26 of the ICCPR.[113]

Section III considers what would happen if discrimination were intentional. The article posits that this would require the aim of crash-optimization systems to be to “maximize the collective well-being of society.”[114] For example, if the criteria for crash optimization were based on the positive traits of an individual, such as talents, abilities, or potential, the system could make decisions based on saving the lives of the scientific and cultural elite. Many other types of preferences could be used to make these calculations: age, sex, race, or social and political status.[115] Section III continues by considering hypothetical future scenarios in which the manufacture of an “immunity device” would allow its carriers to become completely immune to self-driving auto collisions. The article also imagines customer loyalty programs that provide people with additional security or safety.[116]

Section IV examines the structural discrimination that derives from profit-driven motives. The article contends that applying the law would be difficult and troublesome in the future because self-driving car manufacturers could use “human beings as moral crumple zones” that absorb legal liability for structural discrimination.[117]

Section V expands the scope of the article, arguing that these issues with self-driving vehicles will also lead to changes in urban design. The section also argues that there are currently no incentives to create liability structures. The article speculates that normalizing the use of self-driving cars could cause deeper segregation between those with the privilege of wealth and access to technology and those without.[118] Moreover, forms of “architectural exclusion” could be used to exacerbate inequality. For example, when bridges were first built over parkways, they were deliberately made low so that public transportation could not travel on the parkway, effectively keeping poor people off the road and within specific neighborhoods.[119] The article is concerned that similar structures will appear when infrastructure is designed to support self-driving vehicles. Two foreseeable consequences could be the complete removal of human beings as operators of transportation and the continued “privatization of public space.”[120]

In the conclusion, the article asks for a vigilant approach to all future development of self-driving automobile programs in order to avoid dystopian scenarios. The article suggests the “broadest range of participation in the design and development of these systems.”[121]

Joanna J. Bryson, Robots Should Be Slaves (University of Bath, 2009).
This article is made up of six sections and essentially argues, as the title suggests, that robots should be considered, for all intents and purposes, as slaves, not as companions or humans. The author invites the reader to consider deeply how we think about robots and our relationship to this technology.

In the first section after the introduction, “Why slaves?,” the article notes that slaves are defined as “people you own.”[122] The author acknowledges the cruel and horrible history of slavery, particularly its dehumanizing consequences. However, the author argues that “dehumanization is only wrong when it’s applied to someone who really is human[.]”[123] The article makes four fundamental claims: “1) Having servants is good and useful, provided no one is dehumanized; 2) A robot can be a servant without being a person; 3) It is right and natural for people to own robots; 4) It would be wrong to let people think that their robots are persons.”[124]

In the second section, “Why we get the metaphor wrong,” the author pointedly states from the outset that there is no question that humans own robots, and that it would be a mistake to ignore that robots are in our service.[125] Robots do not exist unless humans decide to create them, and we program and design their intelligence and behavior. In the remainder of this section, the paper examines the tendency of roboticists and of science fiction films and books to express a human ethical obligation to robots. Citing a previously published article, the author explains that this tendency is a result of “uncertainty about human identity” and the “arbitrary assignments of empathy.”[126]

In the third section, “Costs and benefits of mis-identification with AI,” the author essentially argues that over-identification with AI is an inefficient use of human time and resources.[127] The section provides measures of the cost of over-identification at the individual level: “1) the absolute amount of time and other resources an individual will allocate to a virtual companion; 2) what other endeavors that individual sacrifices to make that allocation; and 3) whether the tradeoff in benefits the individual derives from their engagement with the AI outweigh the costs or benefits to both that individual and anyone else who might have been affected by the neglected alternative endeavors.”[128] The article argues that individuals have a “finite amount of time and attention for forming social relationships,” and that humans increasingly seek superficial relationships from “lower-risk, faux-social activities such as radio, television and interactive computer games.”[129] At an institutional level, the article examines the larger implications of AI making decisions for humans. In addition, it is dangerous to place moral responsibility on robots instead of humans. As the author states, “we should never be talking about machines making ethical decisions, but rather machines operated correctly within the limits we set for them.”[130] Ultimately, the author argues that misidentification with AI leads to “less responsible and productive members of society.”[131] Automation allows humans to choose less “fulfilling social interactions with a robot over those with a human, just because robotic interactions are more predictable and less risky.”[132]

The fourth section, “Getting the metaphor right,” examines the ways that robots can be useful to human society. The author argues that understanding the robot as a slave is the best way to “get full utility … and … to avoid the moral hazards” of these robots.[133] The section posits that domestic robots will be used as physical support for the infirm, as assistance for those with working-memory challenges, and as tutors for children.

In the fifth section, “Don’t we owe robots anything?,” the article addresses the concern that robots could be exploited and abused. The article states that because humans determine a robot’s goals and desires, “it cannot mind being frustrated unless we program it to perceive frustration as distressing, rather than as an indication of a planning puzzle.”[134] The author goes even further to say that robots should be absolutely replaceable, and that no one should ever have to question whether to save a person or a robot from a burning building.[135] In the conclusion, the author reiterates that robots should be viewed as tools that can enhance our own abilities, and that if humans have any obligations, they are to society and not to robots.[136]

Jason Borenstein and Ron Arkin, Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being, Sci Eng Ethics (2016).
The central question of this article is whether it would be ethical to construct companion robots that nudge a human user to behave in a certain way. The authors note that the robotics community is working towards building robots that can function as lifelong companions for human beings. The article briefly mentions how the fields of cinema, psychology, and marketing study the factors behind influencing human beings; roboticists gather data from these same sources to understand how robots can influence human behavior.[137]

Citing the work of Thaler and Sunstein, the authors define a “nudge” as a way to “shape behavior without resorting to legal or regulatory means,” often through subtle methods.[138] One contemporary example of nudging that Thaler and Sunstein use is Texas’s “Don’t Mess with Texas” anti-littering campaign, which creates feelings of a shared group identity and apparently decreased littering in the state as a result. Another example is how ATMs are programmed to return the debit card to the user before dispensing cash, thereby decreasing the chances of losing the card.

The article contends that there are two types of paternalism that could serve as a justification for robotic nudges. First, there is weak or soft paternalism, which prevents harm in situations where “it is presumed that if a person had additional knowledge or was mentally competent, the person would make a different decision.”[139] Then there is strong or hard paternalism, which protects a person “even if it goes against that person’s voluntary choice.” The article again cites Thaler and Sunstein, who advocate for “libertarian paternalism,” which upholds individual autonomy while still moving towards more “productive ends.”[140] While robots are being developed and used for a variety of purposes, including warfare, security, and healthcare, the article posits that robots could also be used to bring out positive traits in their users through verbal cues, proxemics, or touch.

The article argues that a well-designed robot would have “distinct advantages” over other types of technology in influencing human behavior. Unlike phone apps, robots have a physical and therefore stronger presence in the world, as well as the capacity to move around and a wider range of possibilities for molding their environment. A well-designed robot would have to be a sophisticated machine that could distinguish among human beings and the various human behaviors it is supposed to monitor.[141] If society can agree that robotic nudging is ethically acceptable “when the intent is to promote a person’s own well-being,” the article asks under which conditions this would be permissible.[142] Another question to consider is whether nudging should be used to benefit the individual or a larger group. For example, a robot could tap a parent on the shoulder when the user’s child has been sitting alone watching television for an extended amount of time. While the child’s welfare is the primary concern, the article notes that the parent could feel startled by this tap or even offended by the suggestion that he or she is a bad parent.[143]

The article briefly distinguishes between positive and negative nudges. Positive nudges utilize positive reinforcement methods such as encouragement or rewards; negative nudges use punishment or expressions of disappointment. Furthermore, psychological and sociological data should inform the design of robotic programming in addition to ethical considerations. If the robot is programmed to use abrasive or sudden tactics to deter human behavior, there is a strong likelihood that the human would see this as an intrusion and become angry instead of changing their behavior.[144]

The article next examines some objections to robotic nudging. First, the deliberate manipulation of a free-thinking individual is seen as an intrusion upon human liberty. Second, there are concerns that nudging could be misused and result in “a wide range of abuses.”[145] There is also the issue of “moral paternalism.” Citing Harris, the article defines moral paternalism as the protection from corruption or wickedness. Critics argue that this is “tampering with personal identity.”[146] The same concern arises in biomedical technology where a robotic nudge could change human nature.

The article then asks, “which framework or theory should be used as a basis or foundation for defining what ethical means?”[147] Even if only Western theories were examined, they would include such possibilities as “rights-based approaches, deontology, consequentialism, virtue ethics, [and] cultural relativism.”[148] The article directs its focus to what it considers the most valuable virtue: justice. It concludes that robotic nudges would best be designed to promote social justice. The article relies on two Rawlsian principles of justice: 1) “each person is to have an equal right to the most extensive basic liberty compatible with a similar liberty for others”; and 2) the inequalities of society must be addressed with compensation that benefits everyone.[149]

The article proceeds to describe three “design pathways related to how much control a user could exert over a robot’s nudging behavior: 1) opt in, 2) opt out, 3) no way out.”[150] The Opt In pathway allows users to “consciously and deliberately select their preferences.”[151] This option respects the individual autonomy of the human user. The Opt Out pathway allows the robot to perform a default function until the user makes a modification. This pathway is likened to the automatic enrollment of employees into a retirement plan from which they may choose to withdraw. The article notes that there is a concern with “subordination to technology” because humans tend to accept the default without taking the time to fully explore other available options. The last pathway, No Way Out, does not provide the user with the ability to turn off the robot; in other words, “justice trumps the individual user’s autonomy and rights.”[152] This is compared to the inability of smartphone users to turn off GPS tracking when the police use it.
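To make the three pathways concrete, below is a minimal illustrative sketch, in Python, of how a designer might encode them as a configuration setting in a companion robot’s nudging module. The sketch is not drawn from the article; the class and attribute names (NudgeControl, CompanionRobot, may_nudge) are hypothetical.

from enum import Enum, auto

class NudgeControl(Enum):
    """Hypothetical encoding of the article's three design pathways."""
    OPT_IN = auto()      # nudging is off until the user deliberately enables it
    OPT_OUT = auto()     # nudging runs by default; the user may disable it
    NO_WAY_OUT = auto()  # the user cannot disable nudging

class CompanionRobot:
    def __init__(self, control: NudgeControl):
        self.control = control
        self.user_enabled = False    # meaningful only under OPT_IN
        self.user_disabled = False   # meaningful only under OPT_OUT

    def may_nudge(self) -> bool:
        """Whether the robot is currently permitted to nudge its user."""
        if self.control is NudgeControl.OPT_IN:
            return self.user_enabled
        if self.control is NudgeControl.OPT_OUT:
            return not self.user_disabled
        return True  # NO_WAY_OUT: the nudge proceeds regardless of preference

# Example: an opt-out robot nudges until the user switches the feature off.
robot = CompanionRobot(NudgeControl.OPT_OUT)
assert robot.may_nudge()
robot.user_disabled = True
assert not robot.may_nudge()

Under this sketch, the Opt In setting defaults to no nudging, the Opt Out setting nudges until the user intervenes, and the No Way Out setting ignores the user’s preference entirely, mirroring the article’s point that in that pathway justice trumps individual autonomy.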

In the final section, the article considers the robot designer’s moral obligations in programming the technology. A question that must be considered is, “does the foremost obligation that a robot possesses belong to its owner or to human society overall.”[153] The article is ultimately concerned with “highlight[ing] ethical complexities” of robotic nudging rather than providing precise answers.

Angela Daly et al., Artificial Intelligence Governance and Ethics: Global Perspectives, The Chinese University of Hong Kong Research Paper Series (2019).
This article comprises eight sections and provides an overview of international efforts to develop AI policies: Section 1: Introduction; Section 2: Global Level; Section 3: Europe; Section 4: India; Section 5: China; Section 6: The United States of America; Section 7: Australia; and Section 8: Reflections, issues and next steps. The article lists and briefly explains the AI policies adopted by each government.

In Section 1, the introduction examines the definition of AI and then explores the intersection between AI and ethics. It is comprised of four subsections: 1) What is AI?; 2) AI and Ethics; 3) What does ‘ethics’ mean in AI?; and 4) This Report. The central issues the report addresses are “what are the ethical standards to which AI should adhere”[154] and which actors should be responsible for setting the legal and ethical standards. One concern is that regulation will be established by private companies rather than government agencies. The article next defines morality as “a reflection theory of morality or as the theory of the good life.”[155] AI ethics is understood as a dynamic and interdisciplinary field that must meet two conditions to be effective: 1) AI ethics should utilize “weak normativity” and cannot “universally determine what is right and what is wrong”; and 2) “AI ethics should seek close proximity to its designated object.”[156]

In Section 2, Global Level, the article notes that “the most prominent AI ethics guidelines” are the OECD Principles on AI. These principles have been adopted by 36 member states, including the U.S., and six non-member states: Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania.[157] The 40th International Conference of Data Protection & Privacy Commissioners (ICDPPC) created the Declaration on Ethics and Data Protection in Artificial Intelligence in 2018. The Commissioners also established a permanent working group on Ethics and Data Protection in Artificial Intelligence. Under the subsection, “Technical initiatives,” the article explains that the Institute of Electrical and Electronics Engineers (IEEE) has produced Ethically Aligned Design, which includes “five General Principles to guide the ethical design, development and implementation of autonomous and intelligent systems.”[158] Multinational corporations, such as Amazon, BBC, and Baidu, have also developed their own statements. The World Economic Forum (WEF) released a White Paper about AI governance.

Section 3 focuses on AI policies in Europe. The section is divided into five subsections: European Union, Council of Europe, Germany, Austria, and the United Kingdom. In Subsection 1, the article notes that the EU has positioned itself as “a frontrunner in the global debate on AI governance and ethics.”[159] The General Data Protection Regulation (GDPR) took effect in 2018. The article highlights the GDPR’s provisions on the Right to Object (Article 21) and Automated Individual Decision-Making, Including Profiling (Article 22) as particularly significant elements of the GDPR.

The article next mentions the European Parliament Resolution on Civil Law Rules on Robotics, which was published in 2017. Significantly, the article highlights the Resolution’s call for the existing legal framework to be supplemented with “guiding ethical principles in line with the complexity of robotics and its many social, medical and bioethical implications.”[160] The Annex to the Resolution includes a proposed Code of Ethical Conduct for Robotics Engineers, Code for Research Ethics Committees, License for Designers and License for Users.

Next, the European Commission issued a Communication on Artificial Intelligence for Europe in 2018, with three goals: 1) boosting the EU’s technological and industrial capacity; 2) preparing for the labor, social security and educational socio-economic changes brought by increased use of AI; and 3) establishing an effective ethics and legal framework.[161]

Also in 2018, the European Group on Ethics in Science and New Technologies released a Statement on Artificial Intelligence, Robotics and Autonomous Systems. This Statement suggested basic principles based on “fundamental values laid down in the EU Treaties and in the EU Charter of Fundamental Rights.”[162] The European Union High-Level Expert Group on Artificial Intelligence (“High-Level Expert Group”), which the article describes as a “multi-stakeholder group of 52 experts from academia, civil society and industry,” created its Ethics Guidelines for Trustworthy AI in 2019.[163] These guidelines establish requirements that determine whether or not AI is “trustworthy” and are being implemented in a pilot program across the public and private sectors. Thomas Metzinger, a member of the group, criticized this process as “ethics washing” because certain non-negotiable clauses were removed from the Guidelines, and he calls for AI governance to be separated from industry.[164] The High-Level Expert Group later put out the Policy and Investment Recommendations for Trustworthy AI, with 33 recommendations for the sustainable and inclusive development of AI. These recommendations also condemn the use of AI for state and corporate mass surveillance. The group puts particular focus on “the monitoring and restriction of automated lethal weapons; the monitoring of personalized AI systems built on children’s profiles; and the monitoring of AI systems used in the private sectors which significantly impact on human lives, with the possibility of introducing further obligations on such providers.”[165]

Subsection 2 discusses the governance initiatives of the Council of Europe (COE), which includes all EU Member States and some non-EU states such as eastern European states, Turkey, and Russia. The European Commission for the Efficiency of Justice created the European Ethical Charter in 2018, which sets forth five principles for the development of AI usage in the European judiciary. The COE also published the Guidelines on Artificial Intelligence and Data Protection in 2019.

Subsection 3 centers on Germany. The article notes that the country has invested close to 3 billion Euros in AI research. While Germany cannot compete in scale with larger nations, it has competitively branded itself as supportive of “data protection-friendly, trustworthy, and ‘human centered’ AI systems, which are supposed to be used for the common good…”.[166] Part of the German government’s strategy is to fund research and innovation as well as to create 100 new professorships in the study of “AI and Machine Learning.”

Subsection 4 briefly focuses on Austria. The government drafted a report, ‘Artificial Intelligence Mission Austria 2030,’ which lists numerous stakeholders and participation methods. Austria has also indicated the desire to create a “large national data pool” where the personal data of Austrian citizens, “would be sold to the highest bidder in order to attract cutting edge data-driven research to Austria.”[167]

Subsection 5 focuses on the United Kingdom. In 2018, the UK introduced the AI Sector Deal in order to place “the UK at the forefront of the artificial intelligence and data revolution.”[168] The UK Parliament has undertaken various initiatives to address AI governance issues, including an All-Party Parliamentary Group on AI and a Select Committee on AI. The latter committee studies whether the current legal and regulatory frameworks should be adapted to meet the needs of an AI future. The UK government also partnered with the WEF’s Centre for the Fourth Industrial Revolution to design guidelines for public sector use of AI. This subsection ends with some concern for future AI development initiatives in the face of Brexit, as financing and research collaboration may be withdrawn or become impracticable.[169]

Section 4 concisely focuses on India. The article notes that three national initiatives have been implemented by the Indian government: 1) Digital India, “which aims to make India a digitally empowered knowledge economy”; 2) Make in India, which focuses on making India the designer and developer of AI technology; and 3) the Smart Cities Mission.[170] The Ministry of Commerce and Industry formed an AI Task Force, which reported that AI should be incorporated into “national security, financial technology, manufacturing and agriculture.”[171] The article offers some criticisms of India’s AI governance. There is currently no data protection legislation or other ethical framework in place to address personal data concerns. Furthermore, the suggested ethics guidelines do not “meaningfully engage with issues of fundamental rights, fairness, inclusion, and the limits of data driven decision making.”[172]

Section 5 concentrates on governance efforts made by China. In 2017, the New Generation AI Development Plan called for heavy investment in AI development and for new regulations and ethical policies by 2025.[173] In 2019, the Beijing Academy of Artificial Intelligence released the Beijing AI Principles. These principles consider: “1) the risk of human unemployment by encouraging more research on Human-AI coordination; 2) avoiding the negative implications of ‘malicious AI race’ by promoting cooperation, also on a global level; 3) integrating AI policy with its rapid development in a dynamic and responsive way by making special guidelines across sectors; and 4) continuously making preventive and forecasting policy in a long-term perspective with respect to risks posed by Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence.”[174] Top Chinese universities, companies, and the Artificial Intelligence Industry Alliance (AIIA) released a Joint Pledge on Self Discipline in the Artificial Intelligence Industry. The article points out that while this Pledge is similar to many other AI governance statements, it distinguishes itself by including the language of “secure/safe and controllable” and “self-discipline” as important aspects to be integrated into AI governance.[175] Lastly, the Chinese Ministry of Science and Technology released its eight Governance Principles for the New Generation Artificial Intelligence. The Principles advocate for international collaboration as well as the concept of “agile governance.”[176] This concept addresses the rapidly progressing nature of AI technology and the need for legislation to be dynamic in resolving issues.

Section 6 concentrates on the United States of America. In 2019, the U.S. implemented the Executive Order on Maintaining American Leadership in Artificial Intelligence. This Order created the American AI Initiative, organized by the National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence. The Order includes the protection of “civil liberties, privacy and American values” and the creation of lucrative foreign markets for American-made AI.[177] There are six strategic objectives that must be met by regulators and developers, including the protection of “American technology, economic and national security, civil liberties, privacy, and values.”[178] In addition, AI developers must prioritize national security and public trust and protect AI technology from foreign attacks. Also in 2019, the US Department of Defense initiated its AI Strategy to build lawful and ethical military technology in order to remain competitive with China and Russia.[179] Not-for-profit organizations have also weighed in: The Future of Life Institute issued the 23 Asilomar AI Principles, and OpenAI released its charter with the goal of ensuring that artificial general intelligence, understood as systems that outperform humans at most economically valuable work, benefits all of humanity.

Section 7 focuses on Australia and offers some criticism. The Australian Human Rights Commission started the Technology and Human Rights Project. The Australian Government Department of Industry, Innovation and Science released a paper on Australia’s efforts to develop an ethics framework. The authors argue that the Australian Ethical Framework report developed by Data61 and CSIRO reflects a fundamental misunderstanding of Australian privacy law (citing Johnson 2019). It also suggests “a very narrow understanding of the negative impacts of AI” and fails to see the full impact of the harms AI can have.[180] The report does not offer responsive regulatory approaches for automated decision making. The report sets out eight key principles: “Generates net-benefits; do no harm; regulatory and legal compliance; privacy protection; fairness; transparency and explanability; and contestability[.]”[181] The proposed Australian AI ethical framework also provides a ‘toolkit’ for implementation. Some of the strategies include: “impact assessments; internal/external review; risk assessments; best practice guidelines; industry standards; collaboration; mechanisms for monitoring and improvement; recourse mechanisms; and consultation.”[182]

Lastly, Section 8 provides some of the authors’ reflections and analysis of the information gathered in the previous sections. This section is divided into seven smaller subsections. First, the authors note that there is clearly a distinction between the “haves” and “have-nots” in terms of the resources and capacity to implement advanced governing systems for AI. The article notes that the EU and China are the top groups, but the U.S. could quickly become a strong competitor.[183] Second, there is a fundamental motivation for countries to compete despite the stated need for international collaboration. The U.S., China, and the EU have all stated the desire to become global leaders and to perpetuate nationalist values. This is also evidenced in the fact that smaller countries like Austria are “willing to engage in less ethical projects to attract attention and investment.”[184] Third, the authors note that many of the AI governance statements contain similar goals, such as accountability, transparency, privacy, and protection. However, the authors also note that underneath these shared goals there may be varying “cultural, legal and philosophical understandings.”[185] Fourth, the authors consider “what’s not included” in these discussions. Some questions that need to be asked are: 1) Is there reference to other government or corporate initiatives which may contradict the principles?; 2) What are the hidden costs of AI?; and 3) How are they made visible and internalized? Fifth, “What’s already there?” The authors assert the need to better understand the interactions between policy, rights, private law, and AI ethics. Sixth, the authors raise the issue of “ethics washing.” If these AI governance initiatives are not enforced, then all of the aforementioned research papers, committees, and strategies serve only as “window dressing.”[186] This subsection also raises the issue of “jurisdiction shopping” for locations with less restrictive AI regulations. The authors note that it may be important to take a historical perspective on implementation because each country starts with “different predecessor technologies … as well as different social, economic and political conditions.”[187] Lastly, the authors ask, “who is involved?” It is imperative that all participating voices are heard and that the larger public is appropriately and accurately represented in discussions about AI implementation and governance.[188]

Mark Latonero, Governing Artificial Intelligence: Upholding Human Rights & Dignity, Data & Society (2018).
This report is divided into five sections: 1) Introduction; 2) Bridging AI and Human Rights; 3) A Human Rights Frame For AI Risks and Harms; 4) Stakeholder Overview; and 5) Conclusion. The introduction lays out the report’s underlying belief that if the purpose of AI is to “benefit the common good, at the very least its design and deployment should avoid harms to fundamental human values.”[189] The author states that rooting AI development in a rights-based approach provides “aspirational and normative guidance to uphold human dignity and the inherent worth of every individual, regardless of country or jurisdiction.”[190]

Section 2, Bridging AI and Human Rights, acknowledges that AI is a vast, multi-disciplinary field. Therefore, the section explains “the basic entry points” between human rights and AI. The article explains that current AI technology relies on machine learning systems. Machine learning processes historical data to detect patterns, but if this data is skewed or incomplete, biases can quickly perpetuate throughout the AI system. For example, facial recognition technologies “reproduce culturally engrained biases against people of color” when discriminatory algorithms cannot properly process or recognize darker skinned people.[191]
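As a purely illustrative sketch, not drawn from the report, the toy Python example below shows the mechanism the section describes: a model trained on skewed historical decisions simply learns to reproduce the disparity embedded in those decisions. The data, group labels, and function name (train_majority_rule) are hypothetical.

from collections import Counter

# Hypothetical historical records of (group, past decision).
# Group "B" was historically denied far more often in otherwise similar cases.
history = ([("A", "approve")] * 90 + [("A", "deny")] * 10 +
           [("B", "approve")] * 30 + [("B", "deny")] * 70)

def train_majority_rule(records):
    """Learn the most frequent past decision for each group."""
    by_group = {}
    for group, decision in records:
        by_group.setdefault(group, Counter())[decision] += 1
    return {group: counts.most_common(1)[0][0] for group, counts in by_group.items()}

model = train_majority_rule(history)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the historical skew becomes the rule

Nothing in the training step is explicitly discriminatory; the disparity enters entirely through the skewed historical records, which is how the report explains bias perpetuating through machine learning systems.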

Section 3, A Human Rights Frame For AI Risks And Harms, is divided into five smaller subsections, focusing on five areas of human rights: Nondiscrimination and Equality, Political Participation, Privacy, Freedom of Expression, and Disability Rights. The section opens with the International Bill of Rights as the primary source of human rights, which is comprised of three instruments: 1) the International Covenant on Civil and Political Rights (ICCPR); 2) the International Covenant on Economic, Social and Cultural Rights (ICESCR); and 3) the Universal Declaration of Human Rights (UDHR).

The first subsection, Nondiscrimination and Equality, details uses of discriminatory algorithms in programs like the Allegheny Family Screening Tool (AFST), a predictive risk model that forecasts child abuse and neglect.[192] Studies have found that the AFST uses information about families that use public services, more frequently targets poor residents, and disproportionately places certain kinds of people into problematic categories. In South Africa, the apartheid regime was held up by “classification systems built on databases that sorted citizens by pseudoscientific racial taxonomies.”[193] A report by the World Economic Forum voiced concerns that because the success of machine learning is measured in efficiency and profit, these measures may overshadow responsibility to human rights.[194]

Under the second subsection, Political Participation, the author focuses on the ways that discriminatory AI can spread disinformation. When citizens cannot be informed and misrepresentations are made to them about political campaigns and world events, this violates the right to self-determination and the right to equal participation under the ICCPR.

The third subsection, Privacy, concentrates on the use of algorithmic surveillance by private companies, like Amazon, to gather and reveal personal data about users. The article notes that the right to privacy is found in both the UDHR (Article 12) and the ICCPR (Article 17). Furthermore, protecting the right to privacy “is key to the enjoyment of a number of related rights, such as freedoms of expression, association, political participation, and information.”[195]

In the fourth subsection, Freedom of Expression, the article largely focuses on the management of social media platforms. Algorithms skew the user’s social media feed based on personal preferences and interests. Algorithms also remove negative posts or comments, and private companies have the ability to undermine or “meaningfully determine the boundaries of speech.”[196] The subsection notes the difficulty in finding the right balance between the “legal and social impact relative to multiple rights.”[197]

The final subsection, The Disability Rights Approach and Accessible Design, encapsulates “how technological developments increases the risk to vulnerable groups,” as well as the difficulty in implementing change.[198] For example, Netflix did not comply with ADA guidelines until disability rights groups advocated and pressured the company for years.[199] The article points out that human rights cannot be implemented without laws, and implementation also requires additional incentives like public activism and market forces. Moreover, human rights need to be “infused into the workflow of the organization as part of the jobs of employees working on quality assurance, test suites, and product design documentation,” not merely stated in corporate pronouncements.[200]

In Section 4, Stakeholder Overview, the article provides a snapshot of AI and human rights initiatives in business, civil society, governmental organizations, the UN, intergovernmental organizations, and academia, and the section is divided into the aforementioned six subsections.

Subsection 1, Business, briefly discusses some human rights initiatives by the companies Microsoft, Google, and Facebook. Microsoft completed its first Human Rights Impact Assessment (HRIA) and created a, “methodology for the business sector that are used to examine the impact of a product or action from the viewpoint of the rights holders.”[201] After backlash and petitions from Google employees, the company did not renew its contract with the US Department of Defense to develop AI weapons.[202] A similar situation occurred with Microsoft and facial recognition technology given to US Immigration and Customs Enforcement.[203]

Subsection 2, Civil Society, highlights the fact that AI is dominated by powerful, socio-economically stable countries, which makes it difficult for countries in the Global South to access technology. The subsection notes that four civil society groups—Amnesty International, Global Symposium on Artificial Intelligence and Inclusion, The Digital Asia Hub, and the WEF—have conducted studies to assess the impact of AI on inequality as well as the need to engage diverse groups in AI research and policy making.[204]

Subsection 3, Governments, briefly details the regulatory efforts that some national governments have made. The European Union’s General Data Protection Regulation has secured new guidelines for data protection and privacy. Canada and France have called for, “an international study group that can become a global point of reference for understanding and sharing research results on artificial intelligence issues and best practices.”[205] Both Global Affairs Canada’s Digital Inclusion Lab and Canada’s Treasury Board have conducted studies on AI’s impact on human rights.[206] New York City has passed laws that secure the transparency and fair application of algorithms used by the City in order to prevent a biased system.[207] The UN has investigated, “the impact and responsibilities of tech companies to protect human rights,” including the issue of autonomous weapons and their impact on the conduct of war.[208]

In Subsection 4, Intergovernmental Organizations, the report only notes that the Organization for Economic Cooperation and Development (OECD) has prepared some guidance for its 36 member-countries. This guidance system has also set up National Contact Points, where each nation appoints a representative to hear grievances related to company misconduct.[209]

Subsection 5, Academia, briefly details how academics at Harvard, the University of Essex, and Stanford University have reached the same conclusion: there is an urgent need for greater collaboration between the technology and human rights fields, as well as other disciplines, to build a universal and effective framework.

In its conclusion, the report makes additional recommendations: 1) tech companies need to make an effort to collaborate with local civil society groups and researchers; 2) all tech companies should implement HRIAs “throughout the life of their AI systems”; 3) governments must acknowledge their responsibilities and obligations in protecting fundamental rights and formulate nationally implemented AI policies; 4) lawyers, sociologists, lawmakers, and engineers need to work together to integrate human rights into all business and production models; 5) academics and legal scholars should continue to investigate and research the intersection between AI, human rights, and ethics; and 6) the UN should continue to investigate and enforce human rights with participating governments and monitor rapidly changing AI technology.[210]


[1] Jürgen Schmidhuber, 2006: Celebrating 75 years of AI - History and Outlook: the Next 25 Years, in 4850 50 Years of Artificial Intelligence 29, 30 (Lungarella M., Iida F., Bongard J., & Pfeifer R. eds., 2007).

[2] Id. at 29-30.

[3] Mathias Risse, Human Rights and Artificial Intelligence: An Urgently Needed Agenda, 41 Human Rights Quarterly 2 (2019).

[4] Id. (citing Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing; Reuben Binns, Fairness in Machine Learning: Lessons from Political Philosophy, 81 J. of Mach. Learning Research 1 (2018); Brent Daniel Mittelstadt et al., The Ethics of Algorithms: Mapping the Debate, 3(2) Big Data & Soc’y 3 (2016); Osonde A. Osoba & William Welser IV, An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence (RAND Corporation 2017)).

[5] Eileen Donahoe & Megan MacDuffee Metzger, Artificial Intelligence and Human Rights, 30 J. Democracy 115, 115 (2019).

[6] World Econ. Forum, Harnessing Artificial Intelligence for Earth 7 (Jan. 2018), http://www3.weforum.org/docs/Harnessing_Artificial_Intelligence_for_the_Earth_report_2018.pdf [hereinafter WEF AI].

[7] See Jeremy Rifkin, The End of Work: The Decline of Global Labor Force and the Dawn of the Post-market Era 59-164 (1995).

[8] WEF AI, supra note 6, at 3.

[9] Howard Gardner, Frames of Mind: The Theory of Multiple Intelligences (2011). The theory is criticized for including too many aspects of human character in the definition of intelligence; see Kendra Cherry, Gardner’s Theory of Multiple Intelligence, Verywell Mind (July 17, 2019), https://www.verywellmind.com/gardners-theory-of-multiple-intelligences-2795161.

[10] Id.

[11] See Nick Bostrom, How Long Before Superintelligence?, 2 Int'l J. Future Studs. (1998), reprinted in 5 Linguistic and Philosophical Investigations 11-30 (2006); Steven Livingston & Mathias Risse, The Future Impact of Artificial Intelligence on Humans and Human Rights, 33 Ethics & Int’l. Affairs 141, 141-58 (2019) (quoting the comment by Vernor Vinge at the 1993 VISION-21 Symposium). The algorithms of DeepMind Technologies, Google’s DeepMind, and Google Brain are the best examples relating to AGI.

[12] Jürgen Schmidhuber, supra note 1, at 36.

[13] It is also known as history’s convergence or Omega point Ω. Id. at 39-40.

[14] See Vernor Vinge, Technological Singularity, VISION-21 (1993), https://mindstalk.net/vinge/vinge-sing.html; Jesse Parker, Singularity: A Matter of Life and Death, Disruptor Daily (Sept. 13, 2017), https://www.disruptordaily.com/singularity-matter-life-death/.

[15] Sophia, a humanoid robot is an example. Sophia, Hanson Robotics, https://www.hansonrobotics.com/sophia/ (last visited Nov. 27, 2020).

[16] See Wikipedia, Moore’s Law, https://en.wikipedia.org/wiki/Moore%27s_law. Moore’s Law suggests that computing power has been improving exponentially since 1971.

[17] The areas to regulate new human life include the human rights to development, climate change, life, health, education, criminal justice, equal protection, due process, work, and privacy.

[18] Robot Law (Ryan Calo, Michael Froomkin & Ian Kerr eds., Edward Elgar Publishing 2016), p. 3.

[19] Id. at 6.

[20] Id. at 7-8.

[21] Id. at 12-13.

[22] Id. at 15.

[23] Id. at 16.

[24] Id. at 18.

[25] Id. at 19.

[26] Id. at 20.

[27] Id. at 21.

[28] Id. at 215.

[29] Id. at 216.

[30] Id. at 217-218.

[31] Id. at 217.

[32] Id.

[33] Id. at 218.

[34] Id. at 219.

[35] Id. at 220-221.

[36] Id. at 222.

[37] Id. at 228.

[38] Id. at 223-224.

[39] Id. at 226-227.

[40] Id. at 228.

[41] Id. at 229.

[42] Deborah G. Johnson and Mario Verdicchio, Why Robots Should Not Be Treated Like Animals, 20 Ethics and Information Technology (2018), p. 291-292.

[43] Id. at 293.

[44] Id. at 294-295.

[45] Id. at 296.

[46] Id. at 297.

[47] Id. at 298.

[48] Id. at 299.

[49] David J. Gunkel, The Other Question: Can and Should Robots Have Rights?, 20 Ethics and Information Technology (2018), p. 87-89.

[50] Id. at 95.

[51] Id.

[52] Mathias Risse, Human Rights and Artificial Intelligence: An Urgently Needed Agenda, 41 Human Rights Quarterly (2019), p. 2-3.

[53] Id. at 5-7.

[54] Id. at 10.

[55] Id. at 11-12.

[56] Id. at 13-14.

[57] Eileen Donahoe and Megan MacDuffee Metzger, Artificial Intelligence and Human Rights, 30 Journal of Democracy 116 (Johns Hopkins University Press, 2019).

[58] Id.

[59] Id. at 119.

[60] Id. at 121.

[61] Id. at 124.

[62] Jutta Weber, Robotic Warfare, Human Rights & The Rhetorics of Ethical Machines, in Ethics and Robots (2009), https://www.academia.edu/1048720/Robotic_warfare_human_rights_and_the_rhetoric_of_ethical_machines, p. 1-2.

[63] Id. at 4.

[64] Id. at 2-3.

[65] Id. at 6.

[66] Id. at 7.

[67] Id. at 10.

[68] Id. at 12-13.

[69] Id. at 13.

[70] Id. at 14.

[71] Id. at 13.

[72] Id. at 14.

[73] Id. at 15.

[74] Filippo A. Raso et al., Artificial Intelligence & Human Rights: Opportunities & Risks (Berkman Klein Center, 2018), http://nrs.harvard.edu/urn-3:HUL.InstRepos:38021439, p. 10.

[75] Id. at 14.

[76] Id. at 15.

[77] Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. (2019), p. 244.

[78] Id. at 247.

[79] Id. at 251.

[80] Id. at 253.

[81] Id.

[82] Id.

[83] Id.

[84] Id. at 255.

[85] Id. at 259-261.

[86] Id. at 262.

[87] Id. at 263.

[88] Id. at 264.

[89] Id. at 265.

[90] Id. at 265.

[91] Id. at 266.

[92] Id. at 267.

[93] Id.

[94] Id. at 268.

[95] Id. at 268-269.

[96] Id. at 269.

[97] Id. at 272.

[98] Id. at 272-275.

[99] Id. at 272.

[100] Id.

[101] Id. at 274.

[102] Id. at 275.

[103] Id. at 277.

[104] Id. at 279.

[105] Id.

[106] Id. at 283.

[107] Id. at 289.

[108] Hin-Yan Liu, Three Types of Structural Discrimination Introduced by Autonomous Vehicles, 51 UC Davis L. Rev. Online (2017-2018), p. 151-152.

[109] Id. at 155.

[110] Id. at 156-158.

[111] Id. at 160.

[112] Id. at 161.

[113] Id. at 162.

[114] Id. at 163.

[115] Id. at 163.

[116] Id. at 166.

[117] Id. at 171.

[118] Id. at 172.

[119] Id. at 173.

[120] Id. at 175.

[121] Id. at 178.

[122] Joanna J. Bryson, Robots Should Be Slaves (University of Bath, 2009), https://pdfs.semanticscholar.org/5b9f/4b2a2e28a74669df3789f6701aaed58a43d5.pdf, p. 2.

[123] Id.

[124] Id. at 3.

[125] Id.

[126] Id. at 4.

[127] Id. at 5.

[128] Id.

[129] Id. at 5-6.

[130] Id. at 6.

[131] Id. at 7.

[132] Id.

[133] Id. at 8.

[134] Id. at 9.

[135] Id. at 10.

[136] Id. at 11.

[137] Jason Borenstein and Ron Arkin, Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being, Sci Eng Ethics (2016), p. 32.

[138] Id. at 33.

[139] Id. at 34.

[140] Id.

[141] Id. at 36.

[142] Id. at 37.

[143] Id.

[144] Id. at 38.

[145] Id.

[146] Id. at 39.

[147] Id.

[148] Id.

[149] Id. at 40.

[150] Id. at 42.

[151] Id.

[152] Id. at 43.

[153] Id.

[154] Angela Daly et al., Artificial Intelligence Governance and Ethics: Global Perspectives, The Chinese University of Hong Kong Research Paper Series (2019), p. 7-8.

[155] Id.

[156] Id. at 8.

[157] Id. at 10.

[158] Id. at 11.

[159] Id. at 12.

[160] Id.

[161] Id.

[162] Id. at 13.

[163] Id.

[164] Id.

[165] Id. at 13-14.

[166] Id. at 15.

[167] Id. at 16.

[168] Id. at 16-17.

[169] Id. at 16-18.

[170] Id. at 19.

[171] Id.

[172] Id.

[173] Id. at 20.

[174] Id.

[175] Id. at 21.

[176] Id.

[177] Id. at 23.

[178] Id.

[179] Id. at 24.

[180] Id. at 26.

[181] Id. at 27.

[182] Id. at 28.

[183] Id.

[184] Id. at 29.

[185] Id.

[186] Id. at 30.

[187] Id. at 31.

[188] Id. at 32.

[189] Mark Latonero, Governing Artificial Intelligence: Upholding Human Rights & Dignity, Data & Society (Oct. 10, 2018), https://datasociety.net/library/governing-artificial-intelligence/, p. 5.

[190] Id. at 5-6.

[191] Id. at 9.

[192] Id. at 11.

[193] Id.

[194] Id.

[195] Id. at 14.

[196] Id.

[197] Id. at 15.

[198] Id.

[199] Id. at 16.

[200] Id.

[201] Id. at 18.

[202] Id.

[203] Id. at 19.

[204] Id. at 20.

[205] Id. at 21.

[206] Id.

[207] Id.

[208] Id.

[209] Id.

[210] Id. at 25.