Artificial Intelligence is all around us these days. From Alexa to self-driving cars to smart homes, AI is changing the way we interact with the world. A lot of people (including a certain Elon Musk) have also had plenty to say about the dangers of AI and how it could mean the end of humanity as we know it. But can AI make mistakes? And how serious can those mistakes be?

As you well know, I am a big fan of Artificial Intelligence and have written quite a bit about it; but even so, I am aware that AI, like any technology, has the potential to be misused. I don't believe AI will destroy human beings like some people like to predict, but there is huge potential for damage if AI is not governed and regulated properly. Let's take a look at a case study to get proper context for how AI can make mistakes and cause real-life damage.

How Machines Learn (and how they can goof up)

To fully understand how AI can make mistakes, let's first understand the driving force behind AI, which is Machine Learning. Without going too deep into technical jargon, Machine Learning is the science of teaching a machine to make decisions without hardcoded programming instructions.

As the diagram below shows, a machine is provided with training data and an algorithm with which to understand that data. Once trained, the machine is able to create a model that makes decisions. These decisions are tested for accuracy, and more and more data is fed in until a high level of confidence is achieved in how accurate those decisions will be.

[Image: Machine Learning in action]
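
To make that loop concrete, here is a minimal sketch in Python using scikit-learn. The dataset is a made-up toy one; nothing here comes from a real system, it just walks through the train-model-test cycle described above.

```python
# A minimal sketch of the Machine Learning loop described above,
# using scikit-learn and a synthetic toy dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Training data: feature vectors plus the "right answers" (labels).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 2. An algorithm turns the training data into a model.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 3. The model's decisions are tested for accuracy on unseen data;
#    in practice you keep feeding in data until confidence is high.
predictions = model.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, predictions):.2%}")
```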

All of this seems pretty straightforward, but it also raises one question. The accuracy of a Machine Learning model is directly tied to the quality of the training data being fed to it. What if that data is skewed in a particular direction?

For example, if a facial recognition algorithm was trained only on Caucasian faces, would it also recognize other ethnicities? This is where we start getting into the interesting implications of not training a Machine Learning model properly.
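
Here is a hedged toy simulation of that skew (the two "groups" and their distributions are entirely invented): we train on data that is 95% group A, then measure accuracy on each group separately. On a setup like this, the under-represented group typically scores far worse.

```python
# Toy simulation: group B is nearly absent from training, so the model
# mostly learns group A's decision boundary. Purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's "correct" decision boundary sits in a different
    # region of feature space.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X_a, y_a = make_group(950, shift=0.0)  # well represented in training
X_b, y_b = make_group(50, shift=3.0)   # barely represented in training

model = LogisticRegression(max_iter=1000)
model.fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh samples from each group separately.
for name, shift in [("A", 0.0), ("B", 3.0)]:
    X_test, y_test = make_group(500, shift)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"Group {name} accuracy: {acc:.2%}")
```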

Let's take a look at an actual example of AI getting it wrong.

The COMPAS fiasco

A startling case of AI misuse came to light in 2016, when a report revealed that an AI system being used in courtrooms, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was acting in a biased manner against Black people.

This system was used to assess defendants, taking into account factors like age and previous arrest history to determine whether someone was “high risk”. This could lead to courts imposing stricter jail sentences, heavier fines, and so on, so the real-life impact was clear. The report revealed that the algorithm was making two very serious mistakes when determining whether a defendant was likely to re-offend in the future (a sketch of how this kind of disparity is measured follows the list):

  • Black defendants were twice as likely to be labelled as future offenders
  • White defendants were more likely to be labelled low risk
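
This kind of disparity is exactly what a fairness audit measures. Here is a simplified sketch of that check, comparing false positive rates (people flagged as high risk who never re-offended) across groups; the tiny DataFrame and its column names are hypothetical stand-ins, not the real COMPAS data.

```python
# A simplified sketch of the disparity check behind the report.
# The data and column names here are hypothetical, not COMPAS records.
import pandas as pd

df = pd.DataFrame({
    "race":       ["Black", "Black", "Black", "White", "White", "White"],
    "high_risk":  [1, 1, 0, 0, 0, 1],  # the algorithm's label
    "reoffended": [0, 1, 0, 0, 1, 1],  # what actually happened later
})

for race, group in df.groupby("race"):
    did_not_reoffend = group[group["reoffended"] == 0]
    # False positive rate: labelled high risk but never re-offended.
    fpr = did_not_reoffend["high_risk"].mean()
    print(f"{race}: false positive rate = {fpr:.0%}")
```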

To put it into context, take a look at the cases below and the level of risk COMPAS assigned to each. Do you think the prior offenses and the assigned risk ratings match?

[Image: What's wrong with this picture?]

COMPAS was significant because it made people realize that machine learning algorithms can inherit racial biases and prejudices from training data that is not properly representative of all races. This led to real-life consequences: people received sentences they did not deserve.

The full report can be read here.

How to stop bias in AI systems

Scientists and researchers have proposed certain attributes that have to be present in AI systems in order for the public to trust them. Basically, the algorithm should have the following characteristics:

Integrity: Has the model been built fairly? Are there controls over it to make sure its parameters do not change? How do we know no one is tampering with it?
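
As a hedged illustration, here is a minimal sketch of one such control: fingerprinting a trained model's parameters so that any later change can be detected. This is illustrative only, not a complete integrity framework.

```python
# A minimal sketch of one integrity control: hash the trained model's
# serialized parameters so any tampering shows up as a changed digest.
import hashlib
import pickle

def fingerprint(model) -> str:
    # Note: pickled bytes can vary across library versions; a real
    # control would hash the raw weight arrays and pin versions.
    return hashlib.sha256(pickle.dumps(model)).hexdigest()

# At deployment time, store the digest somewhere tamper-proof:
#   expected = fingerprint(model)
# Before each use, verify nothing has changed:
#   assert fingerprint(model) == expected, "model parameters changed!"
```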

Fairness: Care has to be taken to make sure the data is not inheriting any racial bias from its collection phase. Tests have to be run to ensure this, and the data re-balanced if needed.
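
One common re-balancing approach is to oversample the under-represented group before training. Here is a minimal sketch using scikit-learn's resample utility; the DataFrame and its "group" column are made up for illustration.

```python
# A minimal sketch of re-balancing by oversampling the minority group.
import pandas as pd
from sklearn.utils import resample

df = pd.DataFrame({
    "feature": range(10),
    "group":   ["A"] * 8 + ["B"] * 2,  # group B is under-represented
})

majority = df[df["group"] == "A"]
minority = df[df["group"] == "B"]

# Draw from the minority group with replacement until sizes match.
minority_upsampled = resample(minority, replace=True,
                              n_samples=len(majority), random_state=0)
balanced = pd.concat([majority, minority_upsampled])
print(balanced["group"].value_counts())  # A: 8, B: 8
```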

Transparency: The public needs to know how the model was made and how it reaches its decisions. This is crucial for knowing whether the model is making decisions that impact other people's well-being.
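
As a simple illustration of transparency, here is a sketch that inspects which inputs drive a linear model's decisions. The feature names are hypothetical, loosely echoing the COMPAS example; more complex models would need dedicated explainability tools.

```python
# A minimal sketch of model transparency: inspect which inputs push a
# linear model's decisions, and in which direction. Data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "prior_arrests", "employment_years"]
X = np.array([[25, 3, 1], [40, 0, 10], [30, 5, 2], [50, 1, 20]])
y = np.array([1, 0, 1, 0])  # 1 = labelled high risk (hypothetical)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Positive coefficients push toward "high risk", negative ones away.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```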

I hope this was useful. Stay tuned for more articles on AI governance.