In an earlier post I discussed how AI can potentially make mistakes due to biases present in the training data that is fed to it. These mistakes can have life-changing consequences for people, which is why it is so critical to have a governance framework around the usage of AI. Today I want to focus on how AI systems can be hacked and what cyber-criminals can achieve, using autonomous vehicles as an example.

Autonomous vehicles

Self-driving cars, or autonomous vehicles, are one of the biggest applications of AI and are touted to gradually remove human beings from the driving seat. These cars are designed to do everything a human driver can do by sensing their environment and making intelligent decisions powered by AI.

AI-based vehicles run on machine learning, which builds its ability to make decisions from the data that is fed to it. In the same way that a human being learns to drive and gets better over time, a machine learning model improves the more data it receives. The algorithms are trained to recognize stop signs, vehicles, pedestrians, road markings and so on, and the vehicle acts on those detections as it drives to its destination.

These cars are expected to have a lower failure rate than human driving, but like any technology-powered system they carry cyber-security risks, which could prove far deadlier than a traffic ticket!

How do autonomous vehicles get hacked?

A recent report by the European Union Agency for Cybersecurity (ENISA) looked at the cyber-security risks of autonomous vehicles and what can be done to mitigate them.

  • Compromising the AI supply chain: The hardware and software used in the AI is at risk of being tampered with by cyber-criminals, similar to the supply chain attacks I mentioned earlier. Many AI models are pre-trained and then imported into an organization's AI eco-system, and attackers can potentially contaminate them to plant a back-door for later use.
  • Evading the model via physical means: By slightly modifying the environment, cyber-criminals can potentially "trick" the model. For example, painting over a stop sign or adding graffiti to the road can lead the AI-based system to make wrong decisions.
  • Evading the model via adversarial inputs: Adversarial examples are a technique attackers use to evade machine learning models, especially where computer vision is involved. By slightly manipulating the input to an AI system, the attacker can produce a completely different output. This carefully crafted "noise", unnoticeable to human beings, can lead the AI to completely reclassify an input as a different object, which can cause serious problems (a minimal sketch of how such noise is crafted follows the image below). The example below, from the ENISA report, shows an image recognition system classifying a school bus as guacamole!

[Image: adversarial example showing a school bus misclassified. Source: ENISA report]
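
To make the idea concrete, below is a minimal sketch of how such an adversarial perturbation can be crafted using the Fast Gradient Sign Method (FGSM), one of the simplest and best-known techniques of this kind. The ENISA report does not prescribe a specific method; this sketch assumes Python with PyTorch and torchvision, and uses a generic pre-trained classifier (ResNet-18) purely as a stand-in for the vision model in a vehicle's perception stack.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Assumption: a generic pre-trained ImageNet classifier stands in for the
# perception model inside an autonomous vehicle.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.01) -> torch.Tensor:
    """Craft an adversarial example with the Fast Gradient Sign Method.

    image      -- normalized input tensor of shape (1, 3, H, W)
    true_label -- correct class index for the image
    epsilon    -- perturbation size; kept small so the change is invisible to humans
    """
    image = image.clone().detach().requires_grad_(True)

    # Loss of the model's prediction measured against the *correct* label
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()

    # Push every pixel slightly in the direction that increases the loss,
    # nudging the model towards a wrong prediction.
    return (image + epsilon * image.grad.sign()).detach()
```

Even with a tiny epsilon, the perturbed image often receives a completely different label while looking identical to the original to a human observer, which is exactly what makes this class of attack so dangerous for vision-based driving systems.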

Security recommendations

Many of the recommendations given in the report are standard for any company, such as conducting risk assessments and making security by design a mandatory part of the AI present in these vehicles. Some of the key recommendations that are specific to AI are below:

  • Periodic evaluations of AI models and of the data fed into them, to ensure neither has been changed or tampered with (a minimal integrity-check sketch follows this list)
  • Thorough vetting of the supply chain, including third-party providers, to ensure there is no weak link in the chain
  • Increasing AI cyber-security knowledge among developers and professionals, as the current skills gap is a major obstacle and a cause of risks being introduced.
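
As a concrete illustration of the first recommendation, here is a minimal sketch of one way to check that a deployed model file has not been altered since it was approved, by comparing its hash against a checksum recorded at deployment time. The file name and checksum below are hypothetical placeholders for illustration, not something prescribed by the ENISA report.

```python
import hashlib
from pathlib import Path

# Hypothetical value: the checksum would be recorded when the model was
# first vetted and approved for deployment.
EXPECTED_SHA256 = "checksum-recorded-at-approval-time"

def model_file_unchanged(model_path: str) -> bool:
    """Return True if the model file's SHA-256 digest matches the recorded value."""
    digest = hashlib.sha256(Path(model_path).read_bytes()).hexdigest()
    return digest == EXPECTED_SHA256

# Hypothetical file name used purely for illustration.
if not model_file_unchanged("traffic_sign_classifier.pt"):
    print("Model file differs from the approved version -- investigate before use.")
```

The same idea extends to training data and to third-party models pulled in through the supply chain: record a trusted fingerprint up front and verify it on a schedule, so tampering is detected before the model is trusted on the road.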

The above is just a quick summary of the report, which is definitely worth reading if you want to see some of the real-world implications of insecure AI.