An eye on AI

How should policymakers look at it? Restrictions imposed without understanding its potential, or safeguards to avert a crisis? Getting the balance right is the trick

Aparajita Gupta | September 21, 2018


(Illustration: Ashish Asthana)

Google Assistant, Rekognition and Tay: all of these, often seen in the news, have a common thread – they are powered by Artificial Intelligence (AI). The only difference is that while some have been in the news for the right reasons, others have made it to the headlines for all the wrong reasons. Google Assistant, for instance, is an AI assistant that connects with several devices. Other companies are not far behind: to detect fraud, MasterCard and Visa rely on machine learning algorithms.

But there is a flip side too. The American Civil Liberties Union found that Amazon’s Rekognition, a facial recognition AI, falsely matched pictures of 28 members of the US Congress with those of arrested criminals. Microsoft came up with Tay, an AI chatbot that was meant to learn from conversations; it all went wrong when the bot picked up prejudices and posted racist and sexist messages on Twitter. Here’s another one. Facebook’s AI-based translation service got a Palestinian man arrested by Israeli police for an innocent social media post. He had put up a picture of himself next to a bulldozer with a caption that meant ‘good morning’, but the service translated it as ‘attack them’ or ‘hurt them’.

A World Economic Forum (WEF) poll asked people whom they would trust in sensitive areas – a human or an AI – and the results showed a clear preference. When asked whether they would choose a treatment prescribed by a human or one prescribed by an AI if diagnosed with a life-limiting illness, 53% favoured the human doctor’s prescription. Similarly, when asked whether they would prefer a human judge or an AI judge if brought to trial on a false allegation of having committed a serious offence, 63% favoured the human judge.

Clearly, many people do not trust AI over humans in sensitive decision-making, though the margin is not huge. Juxtapose this with the reality that AI and algorithms are increasingly being used in sensitive areas like judicial proceedings and medicine.

A few years ago, a case in the US caught everyone’s attention for the unique issue involved. Eric L Loomis was found guilty of fleeing from the police and driving a stolen car, and was sentenced to six years of imprisonment. What makes the case unique is that the court, among other factors, relied on the score given by a risk assessment tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). According to his COMPAS score, Loomis was at a high risk of committing crime in future. The Wisconsin supreme court upheld the decision, reasoning that since the COMPAS score was not the only factor behind the sentence, it could be used. But a criminal law scholar has argued that such tools are based on group statistics and so cannot make predictions about the behaviour of individuals. Human behaviour is, after all, notoriously hard to predict.

Another sensitive area where AI is being used is medicine. From diagnosis to treatment, AI seems to be entering every corner of it. IBM Watson is used in several countries to help doctors with patient diagnosis. In India, Manipal Hospitals is reportedly using IBM Watson for Oncology to help physicians give personalised options for cancer care. But it was recently reported abroad that IBM Watson had given unsafe and inaccurate treatment suggestions.

While such applications may help and better inform human experts, there is need for caution. As the examples above show, AI has gone wrong because many of its applications are still at an early stage of development. Further, the absence of clear laws and rules to regulate AI complicates matters. What happens if something goes wrong? Who will be liable? People are already asking these questions. But let us take a step back. Do the people affected by AI’s decisions even know that AI is being used? And if they know, have they consented to it beforehand? These are some of the other questions being asked.

Thinking on these lines, the Artificial Intelligence Committee of the House of Lords (UK) offers useful insights in its report titled “AI in the UK: ready, willing and able?” One of its recommendations is to inform the public when significant or sensitive decisions are being made by AI. It says: “It is important that members of the public are aware of how and when artificial intelligence is being used to make decisions about them, and what implications this will have for them personally. This clarity, and greater digital understanding, will help the public experience the advantages of AI, as well as to opt out of using such products should they have concerns.

“Industry should take the lead in establishing voluntary mechanisms for informing the public when artificial intelligence is being used for significant or sensitive decisions in relation to consumers.…The soon-to-be established AI Council, the proposed industry body for AI, should consider how best to develop and introduce these mechanisms.” Such fair disclosure and transparency would be in people’s best interests. It would also allow them to consent to the use of AI in such matters.

The European Union’s new General Data Protection Regulation goes a step further. Article 22 explicitly gives people the right not to be subject to decisions that are based solely on automated processing and that have legal effects on them. This right is subject to some exceptions.

In light of these developments, India will need to decide for itself. At this stage, some feel it would not be a good idea to impose restrictions on the advancement of AI without understanding its true potential. At the same time, it is important to have safeguards in place for instances where AI goes wrong in fields that can have a huge impact on humans. Getting the balance right is the trick. A good starting point could be to follow the recommendation of the UK’s Artificial Intelligence Committee and inform people when AI is taking sensitive decisions that affect them. Further, it would be a good idea to take informed consent before such use in certain areas.

This will not be an easy path. For a diverse country like India, the challenge will first be to raise general awareness about AI among people, and next to empower them to give informed consent to being subjected to AI decision-making in critical fields. While the answer to ‘how to do it’ may be difficult, the question is worth pondering.

Gandhiji once said, ‘An eye for an eye will make the whole world blind.’ In today’s world, which is witnessing the Fourth Industrial Revolution, an eye on AI may not be too much to ask for.

Gupta is a lawyer and currently a Young Professional with the Economic Advisory Council to the PM and NITI Aayog. The views expressed are personal.

(The article appears in the September 30, 2018 issue)
