ChatGPT is a generative AI model. It uses machine learning to give its users friendly, intelligent responses. It has been growing rapidly and now offers a new, improved version, GPT-4. From this point, it can only improve, and there is no question about its service to humanity. The question is: with the coming of these chatbots, can we still maintain that technology is neutral, and that it is the context of its use, together with the intention of the user, that determines its ethical merits? This question leads us to another important query: how are we to regulate deep synthesis technology, an umbrella term for 'images, audio, videos, virtual scenes and other information' created through generative models? This includes deepfakes, which generate photos, audio and video depicting a person doing or saying things that were never actually recorded. GPT-4 generates texts, images and videos, which does not rule out the possibility of abuse similar to deepfakes. Hence the need for an ethics for both the users and the service providers of deep synthesis technology.
There is a proposal to mark or label content created by generative AI models. This is a step in the right direction, because once such products are released on the web, they are not merely circulated; they may be resynthesized in new contexts. This can happen even without the express intention of the user, since the images we synthesize through deep synthesis technology remain on the web and are available for that same technology to take to a new level of synthesis. We thus move to the image of an image and keep simulating it continuously. This is why an image produced by deep synthesis technology cannot be regarded as innocent: it is ethically charged and has to be regulated so that it cannot be abused to harm innocent people. When the products of deep synthesis technology are labelled or marked as synthesized, the whistle of the precautionary principle is blown. This is not meant to control the spread of synthesized content, but only to distinguish it from true and real information. Otherwise we will be pushed into mis/disinformation, which is detrimental to our society.
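The labelling proposal above can be sketched in code as a minimal provenance record. The field names and format here are hypothetical illustrations (real standards such as C2PA define richer, cryptographically signed manifests); the core idea is simply to bind a "synthesized" flag to a hash of the content, so that the label travels with the content and any tampering is detectable.

```python
import hashlib


def label_synthetic(content: bytes, generator: str) -> dict:
    """Attach a hypothetical provenance label to generated content.

    Binds a 'synthesized' flag and the generator's name to a
    SHA-256 digest of the content itself.
    """
    return {
        "synthesized": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }


def verify_label(content: bytes, label: dict) -> bool:
    """Check that a label actually describes the given content."""
    return (
        label.get("synthesized") is True
        and label.get("sha256") == hashlib.sha256(content).hexdigest()
    )


image = b"...synthesized image bytes..."
label = label_synthetic(image, generator="example-deep-synthesis-model")
assert verify_label(image, label)          # label matches the content
assert not verify_label(b"edited", label)  # resynthesized content no longer matches
```

Because the label is keyed to a hash, a downstream platform can tell when content has been altered after labelling; an unsigned record like this, however, can be stripped or forged, which is why the essay's point about mandating labelling at the platform level matters.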
The increased availability of deep synthesis technology warrants an ethics for this new technology. I believe that our frontal ethics, which considers only what is in front of us, appears weak here. What we need is a dorsal ethics that considers how this technology exerts a dorsal push on us. It is because of this dorsal thrust that we need to label all products of deep synthesis technology. If they are not marked as synthesized, they will remain hidden among non-synthesized information, and the frontal ethics we blindly apply will not easily detect their abuse. But labelling is not easy. It brings us back to the dilemma: who will bell the cat? Are we to leave labelling to the service providers, making it mandatory for them to embed it into their very platforms so that every product synthesized on them is already marked? This might solve most of our issues, as it would make it difficult to create content that masquerades as real when it is clearly fake. It does not mean, however, that all our issues are solved. Some platforms already ask for the real name of the user, and name/ID verification is a good mechanism to guard against those who try to abuse a platform. Yet there may be a few actors who build their own platforms that synthesize information without labelling it. Those who act outside the system that has been laid down have to be brought to book by governments. But what if the government itself is producing synthesized content for propaganda and power games?
We therefore face the challenge of establishing the right to true information as a basic human right. Human rights are denied by governments at several levels, yet claiming a human right to right-information would put moral pressure on governments, who could then be challenged for their abuses in a court of law. There may still be thorns on our path. What if the government itself defines what counts as crossing the red line on deep synthesis platforms? Such a government goes against the freedom of speech and creativity of its citizens. We certainly need to continue our ethical reflection on this rising new technology. Since it is growing with every passing day, the work of catching up becomes ever more difficult. We may be tempted to introduce pre-emptive ethical regulations, but that would muzzle creativity. Hence, we have the challenge to tread a cautious path and continue seeking an ethics for our new world.