What You Need To Know About AI
When it comes to technical terms from the universe of artificial intelligence, some people get confused. We can help.
Meaningless buzzwords are widespread in the IT world. A further complication: hardly any two experts understand precisely the same thing by a given term, as we regularly find out in our web TV section “Ready when you are – the buzzword interrogation”.
Things are a little different with artificial intelligence, cognitive computing and thinking robots: here the meaning is mostly clear, but the appropriate term is not. Sometimes two terms mean the same thing – or do you know the difference between machine intelligence and artificial intelligence? What about machine learning and deep learning? We clarify.
The Generic Term AI
“Artificial Intelligence” (AI) refers to “a wide range of methods, algorithms and technologies that make software so smart that it looks like human intelligence to outsiders,” explains Lynne Parker, head of the Information and Intelligent Systems division of the American National Science Foundation. In other words: machine learning (ML), machine vision (computer vision), natural language processing, robotics and all other related topics are part of AI.
Machine Intelligence = AI
“Some people will distinguish between machine and artificial intelligence, but there is no consensus that the two terms have different meanings,” says Parker. The use of the two terms differs by region: “machine intelligence” goes back more to classic engineering work and is found mainly in Europe, explains Thomas Dietterich, professor at Oregon State University and president of the Association for the Advancement of Artificial Intelligence (AAAI). “Artificial intelligence”, on the other hand, has a kind of science-fiction touch and is more widespread in the USA. In Canada, the term “computational intelligence” is also common.
Machine Learning As A Collective Term
As a sub-area of AI, the term machine learning (ML) describes a wide range of algorithms and methods that allow the performance of software to improve as the amount of data it learns from grows. This includes both neural networks and deep learning – two terms that will play a role later.
“Machine learning is about reading trends out of data sets or recognizing the categories into which a data set can be classified. As soon as the software comes into contact with new data, it can make appropriate decisions.”
Face recognition serves as an example. “I don’t know exactly how it works that I recognize my wife’s face,” says Dietterich. “This is what makes it so difficult to program a computer to do just that.” Machine learning therefore works with examples. “It’s more about input-output than coding,” says the AAAI president.
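Dietterich’s point – learning from input-output examples instead of hand-coded rules – can be illustrated with a toy nearest-neighbour classifier. This is a minimal sketch in Python; the feature values and labels are invented purely for illustration:

```python
import math

# Labeled training examples: (feature vector, label).
# Instead of writing rules, we simply show the program examples.
examples = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((4.0, 4.2), "dog"),
    ((3.8, 4.5), "dog"),
]

def classify(point):
    """Return the label of the nearest training example (1-nearest-neighbour)."""
    nearest = min(examples, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(classify((1.1, 0.9)))  # lands near the "cat" examples -> cat
print(classify((4.1, 4.0)))  # lands near the "dog" examples -> dog
```

New, unseen inputs are classified purely by their similarity to the stored examples – no explicit rule for “cat” or “dog” is ever coded.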
According to Parker, common ML varieties are artificial neural networks, support vector machines, decision trees, Bayesian networks, nearest-neighbour classification, self-organizing maps, case-based reasoning, instance-based learning, hidden Markov models and various types of regression analysis. If you want to know more about the individual terms, you will find detailed information in the linked Wikipedia articles.
Neural Networks vs Deep Learning
Artificial neural networks are a particular type of ML modeled loosely on the way the human brain works – even if, according to Parker, the comparison only goes so far. There are different types of neural networks, but essentially all are based on a system of nodes connected by links of different weights. The nodes are also called “neurons”. They are arranged in several layers: an input layer through which data enters the system, an output layer through which the responses are delivered, and in addition one or more hidden layers in which the actual learning takes place. Typically, neural networks learn by changing the weights of the connections between the nodes.
The term “deep learning” refers to a “deep” neural network, namely one that comprises a vast number of neurons spread over many hidden layers. A “flat” neural network, on the other hand, usually consists of only one or two hidden layers.
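The node-and-layer structure described above can be sketched in a few lines of Python. This is deliberately a toy illustration, not a practical implementation: the weights are random rather than learned, and the hypothetical `n_hidden_layers` setting controls whether the network counts as “flat” (one or two hidden layers) or “deep” (many more):

```python
import math
import random

random.seed(0)  # reproducible random weights

def sigmoid(x):
    """Squash a neuron's weighted input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_inputs, n_outputs):
    """One layer = one weight per connection; untrained (random) here."""
    return [[random.uniform(-1, 1) for _ in range(n_inputs)]
            for _ in range(n_outputs)]

def forward(layers, inputs):
    """Propagate an input vector through all layers (the forward pass)."""
    activations = inputs
    for layer in layers:
        activations = [sigmoid(sum(w * a for w, a in zip(neuron, activations)))
                       for neuron in layer]
    return activations

n_hidden_layers = 3          # 1-2 would be a "flat" network, many more "deep"
sizes = [4] + [5] * n_hidden_layers + [2]   # input, hidden..., output widths
layers = [make_layer(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]

output = forward(layers, [0.5, -0.1, 0.3, 0.9])
print(output)  # two output-neuron activations, each between 0 and 1
```

Learning, which the sketch omits, would consist of nudging those weights (for example via backpropagation) until the outputs match known correct answers.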
Parker explains, “The idea behind deep learning is not new, but it has recently become popular because we have very large amounts of data today and also have fast processors that enable successful solutions to difficult problems.”
Cognitive Computing – It’s Complicated
Cognitive computing is a further sub-area of AI – but one that is not so easy to define. In fact, it is pretty controversial.
Cognitive computing, according to Parker, “refers to computing that deals with reasoning and understanding at a higher level – often in a way that is similar to, or at least modeled after, human consciousness.” It typically revolves more around symbolic and conceptual information than around raw data and sensor streams, with the goal of making high-quality decisions on complex issues.
Cognitive systems use various machine learning techniques, but they are not a single such technique per se. Instead, they can be described as a “complete architecture made up of different AI subsystems that interlock,” says Parker. “It is therefore a sub-area of AI that deals with the cognitive behavior we would associate with the term ‘thinking’, as opposed to mere perception and motor control,” adds Dietterich.
Whether cognitive computing is a genuine AI area or just a popular buzzword has not been conclusively settled. “Cognitive is marketing nonsense,” says Gartner analyst Tom Austin, for example. “It implies that machines think. That is nonsense.”