Neuroscience and the exploration of the human brain regularly make headlines. Their progress raises a question filled with both hope and fear: will it one day be possible to reproduce the entire human brain? Today, computers surpass the brain in raw computing power, yet the brain remains far more complex: could that balance soon be reversed?
These questions lie at the heart of artificial intelligence (AI). Research on artificial intelligence draws on computer science, neurology, and psychology to recreate the functional capabilities of the brain. This synthetic-intelligence approach deeply questions our conception of humanity and of what we call intelligence.
Artificial intelligence with an autonomous will of its own is still the realm of fiction. Yet AI technologies already play a vital role in many aspects of our lives, often without our noticing it; many people are unaware of what artificial intelligence is and how it works. Doctors use it for diagnostics and treatment planning, market forecasting is more efficient with AI, and Google’s search algorithms are more dynamic because of it. AI sits behind assistants such as Cortana and Siri, helps cars drive themselves, and can even screen job applicants. In the United States, laws are already being drafted with the help of artificial intelligence. Research has made many advances in recent years across AI’s subfields.
The Internet is also affected by these rapid developments, particularly search engines and online marketing. Understanding the basics of AI is therefore essential for SEO: what is artificial intelligence, and how does it work? What impact can it have on SEO and online marketing? What are the goals of contemporary research, and in what areas can artificial intelligence be used? What opportunities and risks does it present?
Definition Of Artificial Intelligence: Vision And Reality
What Is Artificial Intelligence?
We can define it as follows: artificial intelligence is a field of computing whose goal is to recreate a technological equivalent of human intelligence. Specialized computer scientists work together with experts from many other areas. But there are competing views on how to define intelligence, and on the theories and methods used to reproduce it.
Artificial intelligence is difficult to define precisely because of its very complexity. The faculties that make up intelligence are controversial in humans and even more so when applied to machines. For example, should a machine be programmed to prioritize rationality? Or should we instead include other human skills, such as intentionality, intuition, and the ability to learn? Likewise, social skills, empathy, and a sense of responsibility may play a decisive role. The question, then, is: should the technology produce essentially analytical capabilities, or an artificial humanity?
Approaches also differ in how closely the machine should resemble humans: should a machine be built with the same structure as the human brain? This simulation approach aims to reproduce exactly the same functioning as the brain. Or should only the machine’s results resemble human intelligence? This phenomenological approach, in which the technical process used to produce the result ultimately does not matter, corresponds to what most people understand by artificial intelligence.
Defining artificial intelligence has always been complicated. In 1950, mathematician Alan Turing devised a test to assess machine intelligence. The Turing test uses a series of questions to determine whether a machine can be identified as such: if the computer’s responses are indistinguishable from a human’s, the computer is considered artificially intelligent. However, this definition is of limited help, since artificial intelligence is now developed for many narrow tasks. Today’s AI does not master human communication but performs specific tasks very efficiently. For these technologies, we apply an admittedly limited version of the Turing test: if a technical system, within a particular field, has the same capacities as a human being, whether for a medical diagnosis or a game of chess, we can speak of an intelligent system. Artificial intelligence is divided into two categories: strong AI and weak AI.
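The imitation game Turing described can be sketched in a few lines. This is a minimal illustration, not a real implementation: the canned answers and function names below are hypothetical, and the machine here simply mirrors the human’s answers to show why indistinguishable responses force the judge into a coin-flip guess.

```python
import random

# A minimal sketch of Turing's imitation game, with purely illustrative,
# canned answers. The judge questions two anonymous respondents and must
# guess which slot (A or B) hides the machine.

CANNED = {
    "What is 2 + 2?": "4, obviously.",
    "How do you feel today?": "A bit tired, but fine overall.",
}

def human_answer(question):
    return CANNED.get(question, "I'm not sure.")

def machine_answer(question):
    # A machine that imitates the human's answers perfectly.
    return CANNED.get(question, "I'm not sure.")

def judge(questions):
    # The respondents are shuffled into anonymous slots A and B.
    respondents = [human_answer, machine_answer]
    random.shuffle(respondents)
    for q in questions:
        answers = [f(q) for f in respondents]
        print(f"Q: {q}  A: {answers[0]}  B: {answers[1]}")
    # Indistinguishable answers leave the judge guessing; the machine
    # "passes" when the judge is right no more often than chance.
    return random.choice("AB")

print("Judge guesses the machine is:", judge(list(CANNED)))
```

In this toy setup the two answer functions are identical, so no line of questioning can separate them; real Turing-test conversations are, of course, open-ended rather than canned.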
The Vision: Strong Artificial Intelligence
Strong artificial intelligence is defined as intelligence that can fully replace human intelligence in all its complexity. This vision of a universal human-like machine has been around since the Enlightenment but remains a fantasy today.
Several dimensions of intelligence belong to strong artificial intelligence: cognitive, psychomotor, social, and emotional intelligence. Most new AI programs rely primarily on cognitive intelligence: logic, organization, problem-solving, autonomy, and learning from an individual perspective.
The premise of strong AI is that artificial intelligence could develop an autonomous consciousness and a will of its own. With this long-term goal, AI research joins philosophy and raises several ethical and legal questions. Some legal theorists already believe that beings endowed with artificial intelligence should be subject to the laws governing humans, but the question of the legal standing of intelligent machines is still debated.
The Reality: Weak Artificial Intelligence
On the other hand, weak artificial intelligence is defined as the development and use of artificial intelligence in defined and limited application areas only. AI research is currently at this stage: its fields of application are “weak” but highly specialized, such as self-driving cars, medical diagnostics, and search algorithms.
Research has made tremendous progress in the area of weak AI. Developing intelligent systems in specific areas has proven more practical, and more ethical, than research into superintelligence. The areas of application of weak artificial intelligence are vast; it is particularly successful in medicine, finance, transport, marketing, and the Internet. We can already foresee that AI technologies of this type will become ever more important in almost all areas of daily life.
How Does Artificial Intelligence Work? History And Methods Of AI
How can we describe the functioning of artificial intelligence? There are two distinct methodological approaches: symbol processing and the neural approach.
- Symbolic artificial intelligence represents knowledge using symbols and works through what is called symbol processing. It processes information “from above”, operating on characters, abstract concepts, and logical conclusions.
- Neural artificial intelligence represents knowledge in the form of artificial neurons linked together in a network. It processes information “from below”, simulating individual artificial neurons which, gathered into interconnected groups, constitute an artificial neural network.
Symbolic AI corresponds to the classic conception of artificial intelligence. It is based on the idea that human intelligence can be reconstructed at a conceptual, logical, and ordered level, independently of concrete empirical values: a top-down approach. Knowledge, including spoken and written language, is represented in the form of abstract symbols. By manipulating symbols according to algorithms, machines learn to recognize, understand, and use them. The intelligent system obtains its information from expert systems, in which data and symbols are classified in a specific, usually logical and interconnected, way. The system can draw on these databases and compare their content with its own.
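The symbol-processing idea can be sketched as a tiny rule-based system in the spirit of the expert systems described above: knowledge lives in explicit if-then rules over symbols, and conclusions follow by logical inference (here, simple forward chaining). The medical-style facts and rules are hypothetical, chosen only to illustrate the mechanism.

```python
# A minimal sketch of symbol processing: knowledge is encoded as
# explicit if-then rules over symbols, and new symbols are derived
# by repeatedly applying the rules (forward chaining).
# All facts and rules below are illustrative, not a real expert system.

facts = {"fever", "cough"}

# Each rule: (set of premises, conclusion to add when all premises hold).
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "possible_pneumonia"),
    ({"fever"}, "elevated_temperature"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:  # keep applying rules until no new symbol appears
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# Derives "respiratory_infection" and "elevated_temperature", but not
# "possible_pneumonia", since "chest_pain" was never asserted.
```

The sketch also hints at the rigidity discussed below: the system can only conclude what its hand-written rules allow, and an unanticipated case simply produces nothing.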
Typical applications of symbolic AI include text and speech recognition as well as logical disciplines such as chess. Symbolic AI works according to strict rules and, given enough computing capacity, can solve highly complex problems. This is how Deep Blue, IBM’s symbolic-AI computer, defeated world chess champion Garry Kasparov in 1997.
The performance of symbolic AI depends on the quality of its expert systems, but it is also inherently limited. Developers had placed high hopes in these systems: thanks to advances in technology, intelligent systems would become ever more powerful, and the dream of artificial intelligence seemed within reach. However, the limits of symbolism became more and more evident. No matter how complex the expert system, symbolic artificial intelligence remains relatively inflexible: a system based on strict rules copes poorly with exceptions, variations, and uncertainty. Symbolic AI also has great difficulty acquiring knowledge independently.
Too rigid and not dynamic enough, the technology could not meet these expectations. In the mid-1970s the AI winter began, a difficult period during which artificial intelligence drew much criticism and suffered funding setbacks. This disgrace lasted until the 1980s, when a revolutionary concept emerged: systems capable of learning and progressing independently. Research then developed around neural artificial intelligence.
Neural Artificial Intelligence
It was Geoffrey Hinton and two of his colleagues who, in 1986, developed the concept of neural artificial intelligence and, at the same time, revitalized the field of AI. They advanced the backpropagation of gradients, laying the foundations for the deep learning used today by almost all artificial intelligence technologies. Thanks to this learning algorithm, deep neural networks can learn continuously and develop independently, a challenge that symbolic AI was not in a position to meet.
Neural artificial intelligence (also called sub-symbolic AI) thus departs from the principle of symbolic knowledge representation. As with human intelligence, learning is divided among small functional units, artificial neurons, linked together in ever-growing groups: a bottom-up approach. The result is a rich and varied artificial neural network.
Neural artificial intelligence aims to mimic the functioning of the brain as precisely as possible by artificially simulating a network of neurons. Unlike symbolic AI, the neural network is stimulated and trained in order to progress; in robotics, for example, this stimulation comes from sensory and motor data. Through these experiences, the AI generates ever-growing knowledge on its own. This is where the major innovation lies: although training takes a lot of time, it allows the machine to learn by itself in the longer term; we sometimes speak of learning machines. Systems based on neural AI are therefore highly dynamic and adaptive, and sometimes no longer entirely understandable by humans.
The construction of an artificial neural network almost invariably follows the same principles:
- A large number of artificial neurons are arranged one above the other in a system of layers, connected by simulated links.
- Networks that communicate across more than two layers are called deep neural networks. The middle layers are arranged hierarchically on top of one another; in some systems, information is transmitted upward across millions of connections. For reference, AlphaGo (Google DeepMind) has 13 middle layers, and Inception (Google) already has 22.
- The top layer acts as a sensor that takes data into the system, be it text, images, or sound. The information is then sent through the network according to specific patterns, and this upper layer constantly feeds and trains the entire system.
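The structure above can be sketched in pure Python: an input layer feeds a hidden (middle) layer, which feeds an output neuron, and the weights are adjusted by gradient backpropagation. The XOR task, the layer sizes, and the learning rate here are illustrative choices, not taken from the article, and this toy network is far smaller than the deep systems mentioned above.

```python
import math
import random

# A minimal sketch of a layered network trained with backpropagation:
# 2 inputs -> H hidden neurons -> 1 output. All sizes and the XOR task
# are illustrative assumptions, not part of the original text.
random.seed(42)
H = 3  # hidden-layer size

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights and biases for the two layers, randomly initialized.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    # Information flows from the input layer through the hidden layer.
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

lr = 0.5  # learning rate
for _ in range(10000):
    for x, target in data:
        h, y = forward(x)
        # Backpropagation: the output error flows back layer by layer.
        delta_out = (y - target) * y * (1 - y)
        for j in range(H):
            delta_h = delta_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * delta_out * h[j]
            b1[j] -= lr * delta_h
            for i in range(2):
                w1[j][i] -= lr * delta_h * x[i]
        b2 -= lr * delta_out

for x, target in data:
    print(x, "->", round(forward(x)[1], 2), "(target", target, ")")
```

With enough training passes the network typically learns to separate the XOR cases, something no single neuron (and no simple fixed rule over the raw inputs) can do; this is the “learning from below” that distinguishes the neural approach from symbolic rule-following.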