Thinking ‘Artificially’

The nimble penetration of Artificial Intelligence (AI) into our everyday life is no secret. Knowingly or not, a good chunk of the population worldwide experiences it directly, thanks to the rapid innovation and development by technology giants such as Google and Apple. However, owing to its dark potential, the argument often goes that AI usage should be limited or, at best, stopped. Rather, I believe it has become a shibboleth that the Artificial Intelligence industry can be halted, or reversed. Hence, the step forward from here should be its regulation and formalisation instead of a total halt.

Keeping this in mind, the Indian Society of International Law organised a ‘Round Table Consultation on Artificial Intelligence’. The discussion panel consisted of a galaxy of participants representing the various perspectives of society. It began with an address by Chief Justice S. Ravindra Bhat of the Rajasthan High Court, which set the agenda for the day’s proceedings.


As the discussion progressed, various facets of AI’s application in diverse fields were discussed, ranging from the market perspective to its usage in cyber-diplomacy and in medical diagnosis.

Subashis Banerjee, a Computer Science professor at IIT Delhi with more than 30 years of experience, shared his experience of using AI to model a self-driving car on the university campus. He also mentioned collaborating with the International Committee of the Red Cross (ICRC) to address certain ethical aspects of the same.

This was followed by representatives of the Government of India (Ministry of Electronics and Information Technology and Ministry of External Affairs) addressing the panel about the steps taken by the government in integrating AI into governance. According to them, the Prime Minister has been actively urging all members of the cabinet to increase research in AI to provide maximum benefit to the citizenry.

Towards the end, the representative of the ICRC, concerned with the usage of AI in warfare, said that they “do not oppose new technology of warfare, per se”. They added that certain military technologies may even assist conflict parties in minimising the humanitarian consequences of war, in particular on civilians. However, any new technology of warfare using AI must be used, and must be capable of being used, in compliance with existing rules of international humanitarian law.

A major highlight of the discussion was that although we as consumers are happy with the advances of AI in the public sphere, many questions related to ethics, regulation and training remain unanswered.

The contentious ethics behind every decision taken independently by AI indeed exposes a worrying slant. While Normative Ethics tests the judgment on a case-by-case basis, the branch of Meta-Ethics outright declares any decision taken by AI unethical. However, since the practicality of Meta-Ethics is minimal in today’s world, this branch can safely be bypassed while drafting any policy or law. Yet even under Normative Ethics, the questions reflected by the befuddled computer are often not easy to answer. Two examples: in the case of an accident caused by a self-driving car, who is to be blamed? If a mishap occurs while using AI in radiology, on whom does the onus fall, the doctor or the AI manufacturer? These questions can only be answered in the form of a policy formulation, drafted after detailed discussion with sufficient assistance from all stakeholders.

Next comes the regulation of the sector. Being so diverse, with applicability ranging from food delivery to weapons systems, it cannot and must not be regulated by a single homogenous authority. Instead, establishing a multi-membered body along the lines of the International Financial Services Centres Authority, one that represents different interests within a single body, can be one solution.

The ICRC maintains that final control of AI, especially during warfare, should remain in human hands. This touches on the humanitarian viewpoint, which regards the safety of human life. Even in other sensitive fields of application, such as medicine and transport, human intervention remains indispensable. This brings us to the other important aspect that needs to be addressed: training. The unfortunate accidents of the Boeing 737 Max put the need to provide ample training to direct users back on the table. Doctors, coders or any other AI users need to be provided with comprehensive instructions. Often, such training is avoided because it inflates the economics involved and makes the product expensive. However, one needs to learn from the Chernobyl accident that such cost-saving measures prove fatal in the future, and hence this educational exercise becomes a desideratum. The safety of human life must be treated as an end in itself and not a means to an end.

Accommodating these inputs in a formal policy framework will most definitely invigorate the status quo. The need of the hour is a policy that allows every vertical to deduce a structure that fits its modus operandi. The extent of Artificial Intelligence, Big Data or Machine Learning is infinite. Hence a one-size-fits-all policy drafted under a single ministry should be avoided at all costs. Rather, the common threads mentioned above, concerning ethics, regulation and training, could be taken up by an umbrella ministry (say, the Ministry of Electronics and Information Technology), which could devolve the other aspects of the policy to suit each application according to its needs.
