Are AI entities and robots deserving of certain rights and freedoms?
There is an argument to be made that AI entities, as they improve and start to replace humans in a number of capacities, should be subject to certain rules and regulations so that they continue to operate for the benefit of humanity. In that case, is there an argument to be made to suggest that AI entities should be able to enjoy rights as well as responsibilities?
It may be helpful to analyse the rationale for granting human rights before suggesting that AI entities should enjoy the same rights or something equivalent. The rationale behind human rights is generally understood to lie in the concepts of dignity and consciousness. Article 1 of the Universal Declaration of Human Rights states that “[a]ll human beings are born free and equal in dignity and rights…” The implication is that all humans are born with an inherent dignity which should not be overridden by others and which should be acknowledged and valued indefinitely.
Underlying this concept of dignity is that of consciousness. Consciousness has never been precisely defined, yet its elusiveness does not diminish its apparent importance to humanity and to rights. The basic idea is that humans are aware of themselves and their feelings, and therefore have the capacity to suffer. Pain and suffering are feelings humans instinctively avoid, because we are programmed to treat them as warning signs of potential threats to our survival; this is why we know not to touch a flame. Consciousness also means that humans have the capacity to be happy. As Yuval Noah Harari notes, the Founding Fathers of the United States sought to limit the power of the state so that the people could engage in the pursuit of happiness. Being aware of ourselves and our feelings means that we can seek out what we believe is best for us, extending our survival and securing our happiness. Thus, one of the major purposes of human rights is to safeguard our happiness, comfort, dignity and survival, while guarding against anything which may threaten them, such as torture.
As such, it is difficult to see how this construction of human rights can apply to robots or AI entities at all. Yuval Noah Harari points out that “[r]obots and computers have no consciousness because, despite their myriad abilities, they feel nothing and crave nothing.” AI entities cannot feel pain, depression, sadness or suffering, the very harms human rights are meant to guard against. There therefore seems to be little point in granting rights to AI entities or robots that are not conscious as humans are.
However, Dr. Kate Darling, a robot ethicist at MIT’s Media Lab, said in an interview with PC Mag that the way humans interact with technology, and the choices we make in how we use it, can be a reflection of ourselves. The argument is that humans may be inclined to treat non-human living things as though they were human. This inclination is not necessarily due to a belief that those beings are conscious and therefore deserve rights, but rather that failing to treat them with dignity reflects on human beings as cruel and savage. This idea departs from ‘human exceptionalism,’ the view that humans come before everything else. Immanuel Kant, the German philosopher, even suggested that this seemingly inherent uniqueness is overstated and that there may be other beings which are like us. Humans, then, may not be so special as to warrant treating every other living thing as secondary.
Could AI entities be deserving of rights as humans are? It would appear so. In 2016, a report by the European Parliament recommended the creation of “electronic personhood,” which would confer rights and responsibilities on advanced AI entities. Yet even if this approach to robot rights is driven by a desire to portray Homo sapiens as an empathetic species, granting rights to AI entities still contravenes the principles underlying the concept of rights. Even the most advanced forms of AI are not capable of developing emotions as humans do. Does turning off an AI-powered machine harm the machine, or make it sad, or make it feel abused? The consensus would likely be no, yet this contention is being challenged. In 2015, Aldebaran Robotics, a robotics firm owned by the Japanese telecom company SoftBank, built a robot designed to display human emotions such as joy and anger. Sensors, cameras and other forms of input provide the bot with data, which it uses to react to particular scenarios with the appropriate emotions. The bot essentially generates its own emotions using an “endocrine-type multi-layer neural network,” emulating the behaviour of humans.
But even if we could create robots capable of feeling and displaying human-like emotions, would it be desirable to do so? It is hard to imagine a scenario in which humans would want to create conscious AI entities or robots. An AI entity that cannot feel hunger or tiredness can work around the clock without breaks or sleep, unlike a human. From an economic perspective, then, the AI entity could be highly productive, to the benefit of business and consumer alike: more goods are made and sold, more services are provided, production never has to stop, and no wages have to be paid. It creates an economy of almost costless production. It is therefore more plausible that humans will build only AI entities which benefit humankind, without much regard for the well-being of the AI entity itself. As Amitai and Oren Etzioni argue in their article, “[h]owever smart a technology may become, it is still a tool to serve human purposes.” This consensus is what fuels the development of self-driving cars, for example, as such an advance is expected to reduce car accidents and traffic on the roads, bettering human lives. Because humans control the creation of AI entities, the question of whether they become conscious is essentially a human decision. If there is no demand for conscious bots, they will not be built.
Yet, while humans may stay committed to developing AI entities solely for the good of humanity, and thus disregard any rights an AI entity could have, is there any plausibility to the idea of AI entities eventually becoming smart enough to develop their own AI entities, which would then demand such rights? Amitai and Oren Etzioni make the point that if AI-powered technologies become increasingly sophisticated, “[t]hey may…act in defiance of the guidelines the original programmers installed.” If AI entities gradually drift away from their creators’ intentions, those entities may build further robots and entities, embarking on a movement in which the rights and interests of AI entities prevail over those of their human creators. But such predictions belong more to sci-fi films and TV shows, and the probability of this prospect actually materialising is perhaps quite low, at least for now. These propositions are therefore certainly questionable, but not necessarily impossible.
Perhaps only in this way could robot rights be firmly established and put on a par with human rights. Unless we are prepared to forgo the sense of human ‘specialness’ which has been central to how we live and understand the world, the idea of robots or AI entities obtaining rights seems far-fetched. As long as humans treat them as mere tools, confined to the desires and intentions of human beings, the argument in favour of robot rights remains weak, but it should not be completely disregarded.