When Artificial Intelligence affects lives

Credit to Author: Igor Kuksov | Date: Tue, 12 Feb 2019 14:00:02 +0000

Despite our previous coverage of some major issues with AI in its current form, people still entrust very important matters to robot assistants. Self-learning systems already help judges and doctors make decisions, and they can even predict crimes that have not yet been committed. Yet the users of such systems are often in the dark about how those conclusions are reached. Artificial intelligence now assists judges, police officers, and doctors, but what guides its decision-making?

All rise, the court is now booting up

In US courts, AI is deployed in decisions relating to sentencing, preventive measures, and mitigation. After studying the relevant data, the AI system assesses whether a suspect is prone to recidivism, and its verdict can turn probation into a real prison sentence or lead to a refusal of bail.

For example, US citizen Eric Loomis was sentenced to six years in jail for driving a car in which a passenger fired shots at a building. The ruling was based on the COMPAS algorithm, which assesses the danger posed by individuals to society. COMPAS was fed the defendant’s profile and track record with the law, and it identified him as an “individual who is at high risk to the community.” The defense challenged the decision on the grounds that the workings of the algorithm were not disclosed, making it impossible to evaluate the fairness of its conclusions. The court rejected this argument.
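
The inner workings of COMPAS are proprietary, which was precisely the defense's complaint, so the exact model is not public. Purely as a hypothetical illustration, a recidivism risk score of this general kind can be thought of as a weighted sum of case attributes squashed into a probability; every feature name and weight below is invented, not taken from COMPAS.

```python
import math

# Hypothetical illustration only: COMPAS's real features and weights are not disclosed.
# A generic risk score of this kind combines weighted case attributes and maps the
# result to a 0..1 pseudo-probability with a logistic function.

WEIGHTS = {
    "prior_convictions": 0.45,    # invented weight
    "age_at_first_arrest": -0.03, # invented weight
    "failed_to_appear": 0.80,     # invented weight
    "employed": -0.50,            # invented weight
}
BIAS = -1.2  # invented intercept

def risk_score(profile: dict) -> float:
    """Return a pseudo-probability that the defendant will reoffend."""
    z = BIAS + sum(weight * profile.get(name, 0) for name, weight in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

defendant = {"prior_convictions": 3, "age_at_first_arrest": 19,
             "failed_to_appear": 1, "employed": 0}
print(f"risk: {risk_score(defendant):.2f}")  # labeled "high risk" above some chosen threshold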

Electronic clairvoyants: AI-powered crime prediction

Some regions of China have gone a step further, using AI to identify potential criminals. Facial-recognition cameras monitor the public and report to law enforcement authorities if something suspicious swims into view. For example, someone who makes a large purchase of fertilizer might be preparing a terrorist attack. Anyone deemed to be acting suspiciously can be arrested or sent to a reeducation camp.

Pre-crime technology is being developed in other countries as well. Police in some parts of the United States and Britain use technology to predict where the next incident is most likely to occur. Many factors are considered: the area’s criminal history, its socioeconomic status, and even the weather forecast. Remarkably, since the tools’ deployment in Chicago districts, gun crime there has dropped by about a third.
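
The vendors do not publish their models, but the general idea of such place-based prediction can be sketched roughly: score each map grid cell from factors like those listed above and flag the highest-scoring cells for extra patrols. The factor names, weights, and threshold below are invented for illustration, not taken from any deployed tool.

```python
# Hypothetical sketch of "hot spot" scoring; real predictive-policing systems are
# far more elaborate. All weights and the weather adjustment are invented.

def cell_score(recent_incidents: int, deprivation_index: float, temp_c: float) -> float:
    """Score one map grid cell; a higher score suggests more patrol attention."""
    weather_factor = 1.2 if temp_c > 25 else 1.0  # warm evenings assumed riskier here
    return (0.6 * recent_incidents + 0.4 * deprivation_index) * weather_factor

city_grid = {
    "cell_12": (7, 0.8, 28.0),  # (incidents last month, deprivation index, forecast temp)
    "cell_13": (1, 0.2, 28.0),
}
hot_spots = [cid for cid, feats in city_grid.items() if cell_score(*feats) > 3.0]
print(hot_spots)  # cells flagged for extra patrols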

The computer will see you now

New technologies are also widely used in healthcare. Artificial doctors consult patients, make diagnoses, analyze checkup results, and assist surgeons during operations.

One of the best-known self-learning systems in healthcare is IBM Watson Health. Doctors coach the AI to diagnose diseases and prescribe therapy. Watson Health has had a lot of positive feedback. Back in 2013, for example, the probability that the supercomputer would select the optimal treatment plan was put at 90%.

However, in the summer of 2018, it was revealed that some of the system’s cancer treatment advice was unsafe. In particular, Watson recommended that a cancer patient with severe bleeding be given a drug that could cause even more blood loss. Fortunately, the scenarios were hypothetical, not real cases.

Sure, human doctors make mistakes too, but when AI is involved, the lines of responsibility are blurred. Would a flesh-and-blood doctor risk contradicting a digital colleague whose creators have crammed it with hundreds of thousands of scientific articles, books, and case histories? And if not, would the doctor shoulder any negative consequences?

AI must be transparent

One of the main problems with using AI to decide people's fates is that the algorithms are often opaque, and tracing the cause of an error in order to prevent a repeat is far from easy. From the viewpoint of the developers of self-learning systems, that is understandable: Who wants to share know-how with potential competitors? But when people's lives are at stake, should commercial secrets take priority?
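
Transparency does not have to mean publishing the whole system. One common approach, sketched below under invented feature names and weights, is to report how much each input contributed to a decision alongside the decision itself, which is exactly the kind of readout defendants such as Loomis were denied.

```python
# Minimal sketch of an auditable decision: with a linear model, each input's
# contribution to the score can be listed next to the outcome.
# Feature names and weights are invented for illustration.

WEIGHTS = {"prior_convictions": 0.45, "failed_to_appear": 0.80, "employed": -0.50}

def explain(profile: dict) -> list[tuple[str, float]]:
    """Return per-feature contributions to the score, largest first."""
    contribs = [(name, weight * profile.get(name, 0)) for name, weight in WEIGHTS.items()]
    return sorted(contribs, key=lambda c: abs(c[1]), reverse=True)

for feature, contribution in explain({"prior_convictions": 3, "failed_to_appear": 1, "employed": 1}):
    print(f"{feature:>20}: {contribution:+.2f}")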

Politicians worldwide are trying to come to grips with regulating nontransparent AI. In the European Union, “data subjects” have the right to know on what basis AI decisions affecting their interests are made. Japan is going down a similar route, but the relevant law is still only being considered.

Some developers are in favor of transparency, but they are thin on the ground. One is tech company CivicScape, which in 2017 released the source code of its predictive-policing system. But this is very much the exception, not the rule.

Now that the AI genie is out of the bottle, there is little chance of humankind ever putting it back. That means until AI-based decisions become provably fair and accurate, AI’s use must rely on well-crafted laws and the competence of both the creators and the users of self-learning systems.
