Can an AI be an ethical agent? It certainly exhibits intelligence. Does it exhibit autonomy? Machine learning AI seems to exhibit some level of autonomy, which is why there is a debate about bestowing personhood on AI. Moreover, can an AI become a moral guide for humans? ChatGPT-3 claims that it can be a good guide for solving some moral puzzles. But the fact that such AI depends on the data sets available to it indicates that its application of universal moral principles to specific cases can still be wanting. Most probably, AI will be morally biased depending on the context from which it emerges. Hence, the question boils down to this: can ethics be inscribed in the generation and operation of AI? If ethics is lacking in the generation of AI, this will show in its ruthless operations. We therefore need ethics both in the very generation of AI, which will then be inscribed in its operations, and in the operation of AI in specific cases. It is thus important to ask: can we write ethics into the very operation of AI? This would mean we should be able to convert AI into ethical agents who then act with ethical responsibility. We need to scrutinize this issue critically because we are facing a mushrooming of AI systems and their interactions with humans everywhere. This is why we face the challenge of discerning the moral status of AI. The issue is simple: can a self-driving car be held responsible for its actions?
It is indeed urgent that we find relevant answers to the moral quandaries arising from AI. It is tempting to apply virtue ethics, deontological ethics, or utilitarian ethics to discern the moral consequences of AI. This suggests that moral agency is discerned through the moral practice of a particular community. In fact, it is these ethical practices of communities that often guide us in discerning the moral abilities of a person in a coma or with any other such inhibiting disability. But should we leave the moral status of AI to some community-grounded ethical practices? The issue is: can we inscribe or write virtue, deontological, or utilitarian ethics into the operation of AI? Can we make AI HomoKantianus, HomoAristotelianus, or HomoBenthamus? All this would assume that AI can operate autonomously. AI seems to operate in freedom when it chooses an action that produces optimal results that outsmart human abilities. But will AI be vulnerable to inflicting harm upon itself? Will it self-destruct as humans often do? Will it excel in committing stupid mistakes, as several highly intelligent humans do? Only then, perhaps, may we say that AI is a moral agent. But would we benefit by making AI capable of stupid mistakes? Would stupidity not go against the very purpose for which we invented AI? This is why we cannot accept AI as a moral agent like human beings. If this is so, can it still be a moral agent in a limited sense? Can we ascribe moral responsibility to AI?
We certainly cannot make AI HomoKantianus, HomoAristotelianus, or HomoBenthamus. But can we make an AI MachinaKantianus, MachinaAristotelianus, or MachinaBenthamus? This would mean AI might begin to operate ethically from Kantian deontological ethics, Aristotelian virtue ethics, or Jeremy Bentham's utilitarian ethics. It seems that such a thing is possible. The question is not just about the ethics that informs the generation of AI; it is about the operation of AI. Just as we have ethical codes for the behaviour of humans, can we have a code of ethics for AI? We need this because AI seems to indicate that the human agent is a bottleneck in its optimal operations. If crucial decisions are left to humans, this not only slows operations but may also lead AI into error. This is why some argue that it is time for humans to step aside and allow AI to operate as AI. If this scenario becomes real, we face the challenge of inscribing ethics into the working of AI. All ethics is relational. The ethical code of AI should necessarily consider the behavioural consequences of AI in relation to humans and the environment. The AI code may have to operate in such a way that AI does not actively harm humans. Even if humans try to induce AI to inflict harm on other humans, it has to be able to refuse to act. AI may pose risks not just to humans but also to our earth and eventually to the cosmos. This is why we need AI architected in such a way that it is able to detect fake data and unethical behaviour.
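The demand that an AI refuse to act harmfully, no matter who issues the instruction, can be illustrated with a minimal sketch. Everything here is hypothetical: the `Action` record, the `predicted_human_harm` score (assumed to come from some upstream assessment), the threshold, and the `ethical_gate` function are illustrative names, not an existing system or API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    requested_by: str
    predicted_human_harm: float  # 0.0 (none) to 1.0 (severe); assumed upstream estimate

HARM_THRESHOLD = 0.1  # illustrative: refuse anything with non-trivial predicted harm

def ethical_gate(action: Action) -> bool:
    """Return True if the action may proceed, False if it must be refused.

    The refusal is unconditional: it takes no account of who requested the
    action, so a human operator cannot induce the system to harm other humans.
    """
    if action.predicted_human_harm >= HARM_THRESHOLD:
        return False  # refuse to act, regardless of the requester
    return True

# A harmful request is refused even when a human operator issues it.
harmful = Action("disable safety interlock", "operator", predicted_human_harm=0.9)
benign = Action("dim cabin lights", "operator", predicted_human_harm=0.0)
print(ethical_gate(harmful))  # False
print(ethical_gate(benign))   # True
```

The point of the sketch is structural: the ethical constraint sits inside the operation of the system rather than being left to the discretion of the human in the loop.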
Since AI depends on large data sets, it is important that AI has an inbuilt system for cleaning the data it uses. Otherwise, we may endanger humans and other beings. With the rise of deepfakes, the detection of fake data is very important. There is also the danger of an AI code becoming outdated. An AI code of ethics is fundamentally reactive; it has to become proactive and be able to factor in the new or changing contexts of the operations of AI. The best way we teach our children is by giving them moral guiding principles for ethical behaviour. This is different with AI. AI systems are 'children' for whom what to do and what not to do must be singled out in every case. Therefore, there can be gaps, unimagined by us, in the writing of the ethical code of AI. Besides, it is we humans who are developing the rules for the working of AI. Hence, our writing of the code would be biased by our own interests. What if we got ChatGPT-4 to write the ethical code of AI? Maybe we would arrive at Machina-Ethicus. What we may need is a middle ground achieved in dialogue with AI like ChatGPT-4. To do this, we may need more sentient AIs to emerge, since neuroscience today teaches that human emotions play a great role in our moral life.
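The 'inbuilt system of cleaning data' called for above can be pictured as a validation gate that runs before any record reaches the model. This is a sketch under stated assumptions: the field names (`source`, `flagged_synthetic`, `value`) and the checks themselves are hypothetical stand-ins for whatever provenance verification and deepfake detection a real pipeline would employ.

```python
def clean_dataset(records):
    """Keep only records that pass basic provenance and plausibility checks.

    These checks are illustrative: a real system would verify provenance
    cryptographically, run deepfake detectors, and revise its checks as
    contexts change, rather than rely on simple flags like these.
    """
    cleaned = []
    for rec in records:
        if not rec.get("source"):         # discard data of unknown origin
            continue
        if rec.get("flagged_synthetic"):  # discard suspected fakes/deepfakes
            continue
        if rec.get("value") is None:      # discard incomplete records
            continue
        cleaned.append(rec)
    return cleaned

records = [
    {"source": "sensor-A", "value": 3.2},
    {"source": None, "value": 1.0},                                 # unknown origin
    {"source": "feed-B", "value": 7.7, "flagged_synthetic": True},  # suspected fake
]
print(clean_dataset(records))  # keeps only the first record
```

Note that such a gate is exactly the 'reactive' posture the paragraph criticises: the checks encode only the failure modes we have already imagined, which is why the gaps in any hand-written code of this kind remain a real concern.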