Some find it fascinating, others are directly involved in its further development, and still others get a queasy feeling in the stomach: we are talking about artificial intelligence, or AI for short. Although the technology is already in use in many places and has enormous potential, it does not generate only enthusiasm. Above all, AI security is perceived as a problem, by laypeople and experts alike. In the following, we address five central facts about AI security.
First of all, it is worth looking at a survey conducted in 2018 in a collaboration between BlackBerry Cylance and the SANS Institute. A total of 260 cybersecurity experts were interviewed, and they ultimately identified seven significant problems with the technology.
Also interesting is a joint study by the German BSI (Federal Office for Information Security) and the French ANSSI (Agence nationale de la sécurité des systèmes d’information). Its result: the data basis of neural networks and their inputs are highly vulnerable, and reliability problems can have potentially dangerous consequences. The fallibility of artificial intelligence should not be underestimated; it must be recognized as a real danger, especially with regard to the use of AI in critical areas such as autonomous driving or medical diagnosis.
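The input vulnerability the study points to can be made concrete with the well-known fast gradient sign method (FGSM), which nudges an input just enough to change a model's prediction. What follows is a minimal sketch, assuming PyTorch; the model, data, and epsilon value are illustrative placeholders and not part of the BSI/ANSSI study.

```python
# Sketch: FGSM, a classic demonstration of how sensitive neural
# networks are to small input manipulations. Model and data are toys.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.1) -> torch.Tensor:
    """Perturb input x so the model is more likely to misclassify it."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage: a tiny classifier on random "images" (purely illustrative).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)        # batch of fake inputs
y = torch.randint(0, 10, (8,))      # fake labels
x_adv = fgsm_attack(model, x, y)
# Fraction of predictions that survive the perturbation unchanged:
print((model(x).argmax(1) == model(x_adv).argmax(1)).float().mean())
```

The unsettling point is that the perturbation is bounded by epsilon and can be far too small for a human to notice, for example on a traffic sign or a medical scan.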
The term “data poisoning” is becoming more and more common, but what exactly does it mean? Put simply, it is the deliberate feeding of a machine learning system with manipulated training data, which corrupts what the system learns: a clear threat to AI security and to the reliability of supposedly “safe”, self-learning AI applications.
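A minimal sketch of the idea, assuming scikit-learn: we train the same classifier once on clean labels and once with a portion of the training labels deliberately flipped, then compare test accuracy. The dataset, model, and flip rate are illustrative choices.

```python
# Sketch: label-flipping as a simple form of data poisoning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean accuracy:   ", clean.score(X_te, y_te))

# Poison 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]   # binary labels: flip 0 <-> 1

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```

Even this toy setup shows a measurable drop in test accuracy; real attacks are far subtler, manipulating only a few carefully chosen samples so the corruption stays undetected.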
Another topic that cannot be ignored when discussing AI security is Generative Adversarial Networks, or GANs for short. The name describes the principle: two neural networks, trained on the same data, work as opponents. One network, the generator, creates a candidate, which the second network, the so-called discriminator, accepts or rejects. Many experts view this technology as dangerous because it can turn neural networks into harmful instruments or even weapons.
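To make the generator/discriminator interplay concrete, here is a minimal GAN sketch, assuming PyTorch; it learns to imitate a simple one-dimensional Gaussian distribution. Network sizes, learning rates, and the target distribution are illustrative choices.

```python
# Sketch: a minimal GAN on 1-D Gaussian data. The generator proposes
# candidates; the discriminator accepts or rejects them; both improve
# by playing against each other.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 5      # "training data": N(5, 2)
    fake = G(torch.randn(64, 8))           # generator's candidates

    # Discriminator: label real samples 1, generated samples 0.
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator into outputting 1 for fakes.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print(fake.mean().item(), fake.std().item())  # should approach ~5 and ~2
```

The same adversarial training loop that here imitates a harmless number distribution is what, at scale, produces convincing deepfake images, voices, and videos, which is exactly why experts see the technology as double-edged.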
Looking at the general population, it becomes clear that trust in AI and AI security has been largely lost in many places. The mistrust stems on the one hand from a lack of knowledge, and on the other from detailed knowledge of the potential dangers of artificial intelligence. If the future brings a significant increase in AI applications in business and in everyday private life, a great deal of educational work on AI security and its problems will be needed.