AI Security – 5 Facts About The Security Of Artificial Intelligence

Some find it fascinating, others are directly involved in its further development, and for others still it produces a queasy feeling in the stomach: we are talking about artificial intelligence, or AI for short. Although the technology is already being used in many places and has enormous potential, it does not generate only enthusiasm. Above all, AI security is perceived as a problem by many people, laypeople and experts alike. Below, we take a closer look at five central facts about AI security.

# 1: Survey Reveals Seven Main Problems For AI Security

First of all, it is worth looking at a survey conducted in 2018 as a collaboration between BlackBerry Cylance and the SANS Institute. A total of 260 cybersecurity experts were interviewed, and they ultimately identified seven significant problems with the technology:

  • Unreasonable reliance on a single AI master algorithm
  • Negative impact on privacy
  • Lack of understanding of the limits of an algorithm
  • Inappropriate training situations
  • Insufficiently protected data and metadata
  • Lack of transparency about the algorithms’ decision-making methods
  • Incorrectly used algorithms

# 2: A Study By The BSI And ANSSI On AI Security Comes To A Worrying Result

A study carried out by the BSI (Federal Office for Information Security) and the French ANSSI (Agence nationale de la sécurité des systèmes d’information) is also interesting.

The result: the training data of neural networks and the data inputs are highly vulnerable, and reliability problems could have potentially dangerous consequences. The fallibility of artificial intelligence should not be underestimated; it should be recognized as a real danger, especially with regard to the use of AI in critical areas such as autonomous driving or medical diagnosis.
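To make the input vulnerability tangible, here is a minimal sketch of an adversarial input attack (the fast gradient sign method) in Python with PyTorch. The toy model, the random "image," and the label are illustrative assumptions only, not details from the study:

```python
# Minimal sketch of an adversarial input attack (FGSM), illustrating why
# data inputs are a weak point. Model and data are placeholders.
import torch
import torch.nn as nn

# Toy classifier standing in for any deployed neural network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, label, epsilon=0.1):
    """Perturb input x in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    # A perturbation that is tiny per pixel can still flip the prediction.
    return (x + epsilon * x.grad.sign()).detach()

x = torch.rand(1, 1, 28, 28)   # stand-in "image"
label = torch.tensor([3])      # stand-in ground-truth class
x_adv = fgsm_attack(x, label)
print(model(x).argmax(1), model(x_adv).argmax(1))  # prediction may change
```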

# 3: “Data Poisoning” Is Seen As A Significant Threat To AI Security

The term “data poisoning” is appearing more and more often, but what exactly does it mean? In simple terms, it is the deliberate feeding of a machine learning system with manipulated or incorrect data, which corrupts everything the system learns from that data. This is a clear threat to AI security and to the reliability of supposedly “safe,” self-learning AI applications.
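As a rough illustration, the following Python sketch flips a fraction of the training labels of a toy scikit-learn classifier. The synthetic dataset, the 30% poisoning rate, and the model choice are assumptions made purely for demonstration:

```python
# A minimal sketch of label-flipping data poisoning using scikit-learn.
# Dataset and model are illustrative, not a specific real-world attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline.
clean = LogisticRegression().fit(X_train, y_train)

# "Poison" 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression().fit(X_train, y_poisoned)
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

The poisoned model usually scores noticeably worse on the clean test set, which is exactly the kind of silent degradation an attacker is after.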

# 4: Experts Rate GAN Trend As Dangerous

Another topic that must not be left out of a discussion of AI security is Generative Adversarial Networks, GAN for short. Loosely translated, the name stands for “generating, opposing networks”: two neural networks work against each other as opponents, trained on the same training data. One network, the generator, creates a candidate, which is then accepted or rejected by the second network, the so-called discriminator. Many experts view this technology as dangerous because it can turn neural networks into harmful instruments or even weapons.
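For orientation, here is a minimal generator/discriminator training loop in Python with PyTorch. The toy one-dimensional target distribution, network sizes, and hyperparameters are illustrative assumptions only:

```python
# A minimal generator-vs-discriminator (GAN) sketch in PyTorch.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 4.0   # samples from the "real" data
    fake = generator(torch.randn(64, 8))     # the generator's candidates

    # Discriminator: accept real samples, reject generated ones.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into accepting its candidates.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The adversarial dynamic is the point: the generator gets better precisely because the discriminator keeps rejecting its candidates, which is also why the same mechanism can be turned toward producing convincing fakes.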

# 5: Trust In AI Security Needs To Be Regained

Looking across the population, trust in AI and AI security has been almost completely lost in many places. This mistrust stems on the one hand from a lack of knowledge, and on the other hand from extensive knowledge of the potential dangers of artificial intelligence. If the future brings a significant increase in AI applications in business and in everyday private life, much educational work will have to be done on AI security and its problems.

