We cannot simply apply the utilitarian approach to AI.
If we do, the so-called trolley problem will simply be resolved by killing one person in order to save five. This mode of thinking does not truly value human life. Besides, neither a "no harm to humans" principle, nor Asimov's first law, nor even Kant's categorical imperative will prevent AI from harming non-humans. We face the challenge of encoding laws, policies and virtues in AI. Therefore, the question 'can we write an ethical algorithm?' becomes central to us. Robots and other forms of AI will have to make life-and-death decisions affecting both humans and non-humans. Must we save humans and forget non-humans? We need to encode ethics in AI systems that will make crucial decisions without human supervision. Perhaps we must turn to the virtue ethics of Aristotle or the ethics of dharma from the Indian tradition.
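To see why a crude utilitarian calculus is inadequate, consider a minimal sketch (the scenario and function names are hypothetical, purely for illustration): a decision rule that only counts lives will always divert the trolley, and nothing in it values the one life, or any non-human life, in its own right.

```python
# A deliberately naive utilitarian decision rule (illustrative only).
# Each option maps to the number of human lives lost if chosen.
def choose_action(options: dict) -> str:
    """Pick the option that minimizes human deaths, ignoring every
    other moral consideration (rights, intentions, non-humans)."""
    return min(options, key=options.get)

trolley = {
    "do_nothing": 5,      # trolley kills five people on the main track
    "divert_trolley": 1,  # diverting kills one person on the side track
}

print(choose_action(trolley))  # -> "divert_trolley"
# The rule 'resolves' the dilemma by arithmetic alone; the distinct
# worth of the one person never enters the computation.
```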
Beyond encoding laws, policies and virtues in AI, we have to rise above our anthropocentric concerns and embrace a posthumanities approach that will help us overcome the human bias that is bound to afflict AI. Human biases enter the operation of AI automatically and unintentionally. Machine learning requires large data sets, and these data sets are drawn from our world, where biases of gender, race, caste, faith and so on are already embedded. If we do not tackle these biases, they will reappear in the outputs of machine learning systems. Thus, we have to cleanse the data of these human biases. The challenge is not merely to remove racial and other inter-human biases; we have to purge AI of anthropocentric biases altogether, because AI will operate on its own without human supervision and will serve the non-human world too.
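A minimal sketch may make the point concrete (the data and labels are hypothetical assumptions, not real records): if a historical data set encodes a skew against one group, a model fitted to that data reproduces the skew in its outputs.

```python
# Illustrative only: a toy 'historical' loan data set in which group B
# was approved far less often than group A for otherwise similar cases.
historical = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# A naive 'model' that simply learns each group's historical approval rate.
def fit_approval_rates(records):
    rates = {}
    for group in sorted({g for g, _ in records}):
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = fit_approval_rates(historical)
print(model)  # -> {'A': 0.75, 'B': 0.25}
# The learned 'policy' faithfully reproduces the bias already
# embedded in the training data; nothing corrected it.
```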
It is difficult, even humanly impossible, to understand why a machine learning algorithm came to a particular conclusion. We cannot humanly audit these operations: there is no linear decision tree to follow in order to determine why the system took this or that decision. A machine learning system, even though it is an AI, cannot explain why it has made a given decision. It is therefore possible that these machines make the right decisions all the time for wrong or different reasons, and when they truly make an incorrect decision, it would be difficult to know without a profound audit of the data used to train them. This again means that we face the challenge of encoding ethics and virtues in our AI. Since AI uses human language systems, it is very difficult to encode a semantically exact notion of fairness, justice or non-partisanship, as these words do not always carry the same meaning. Hence, it is likely that technologists will codify already existing biases in the data sets they use to train their machine learning systems.
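A small sketch of this opacity (the weights and features are hypothetical numbers, purely illustrative): even for a tiny learned model, the parameters offer no human-readable reason for a decision, and the crudest 'audit' available is to perturb inputs and watch the output move, which reveals sensitivity rather than reasons.

```python
# Illustrative only: a tiny 'learned' model whose three weights came out
# of some training process. The numbers carry no human-readable rationale.
weights = [0.83, -1.41, 0.07]

def predict(features):
    score = sum(w * x for w, x in zip(weights, features))
    return "approve" if score > 0 else "deny"

applicant = [1.0, 0.2, 3.5]
print(predict(applicant))  # -> "approve", but on what grounds?

# The closest thing to an audit: nudge one feature at a time and
# observe whether the decision flips. Sensitivity, not explanation.
for i in range(len(applicant)):
    probe = list(applicant)
    probe[i] += 1.0
    print(f"feature {i} +1.0 ->", predict(probe))
```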
This is why we face the challenge of dialogue with technologists and experts, so that they are enabled to encode ethics, laws and virtues into machine learning systems. Encoding these ethics and virtues can be viewed as encoding values. Thinking of ethics and virtues as values enables us to mathematize them, so that they can be encoded in machine learning systems. It also enables technologists to encode different values into different machine learning systems performing different tasks. This possibility of encoding different values in different AI systems, depending on the nature of the task and the purpose for which it is performed, is in line with the posthumanity approach we seek: cleansed of anthropocentric bias, such AI can serve humans as well as animals, plants, microbes and the material universe.
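As a purely illustrative sketch of what 'mathematizing values' might mean (the system names, value categories and weights below are hypothetical assumptions, not a worked-out ethics), each system could carry a weighted profile over the parties it must consider, with different profiles for different tasks:

```python
# Illustrative only: 'values' rendered as weights over affected parties.
# Different systems carry different profiles depending on their task.
VALUE_PROFILES = {
    "medical_triage_ai": {"human": 1.0, "animal": 0.1, "environment": 0.1},
    "wildlife_drone_ai": {"human": 0.5, "animal": 1.0, "environment": 0.8},
}

def evaluate(profile_name, impacts):
    """Score an action by weighting its impact on each party
    (positive = benefit, negative = harm) by the system's values."""
    profile = VALUE_PROFILES[profile_name]
    return sum(profile[party] * impact for party, impact in impacts.items())

# One action, two systems: harm to animals barely registers for the
# triage system but dominates the wildlife system's score.
action = {"human": 0.2, "animal": -0.6, "environment": 0.1}
print(evaluate("medical_triage_ai", action))  # ->  0.15
print(evaluate("wildlife_drone_ai", action))  # -> -0.42
```

The design point the sketch makes is modest: once values are expressed numerically, different task-specific profiles can be assigned and compared, which is all the paragraph above claims.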
All this will be possible only when technologists, ethicists and public policy experts dialogue with each other and create codes of values (ethics and virtues) to be encoded in AI systems performing different tasks. In this context, I find the dharmic thinking of the Indian tradition profoundly inspirational. Drawing from it, we can say that we face the challenge of encoding dharma in AI. Each AI, depending on the task it performs and the purpose it serves, will need a specific dharma encoded in it. This is certainly a posthumanity approach, and it will enable us to create AI that serves the human as well as the non-human world fairly.