From the beginning until today, a common thread runs through human evolution: the invention, use, and optimization of tools. Think of the monumental scene from Stanley Kubrick's film “2001: A Space Odyssey”: the leader of a group of great apes has a flash of inspiration. He recognizes the potential of an animal bone as a weapon and uses it to drive his rivals away. Intoxicated by his success, he throws the bone into the air. The camera follows the flying bone, which, in a famous match cut, is transformed into a spaceship. Hardly any other image better illustrates the evolution of humanity as a long chain of tools – literally from animal bones to spaceships.
How Can We Talk About Artificial Intelligence?
The modern artificial intelligence (AI) algorithms available to us today are also part of this chain. If we want to examine and discuss them in a targeted manner, we quickly come across a central challenge: we have always approached AI with the premise of, in the best case, creating our artificial counterpart. In this sense, when looking at AI algorithms, we often move in symbolic and philosophical spheres.
It is problematic to project ourselves onto technology and tools and, vice versa, to project technology onto us. Yet people tend to compare themselves to the latest and most advanced technologies. Sigmund Freud viewed humans as a thermodynamic system – a saucepan that explodes if the pressure becomes too high. The physician Fritz Kahn illustrated man as a capitalist machine: while the workers labor in the stomach, the foremen in the brain manage the system.
Metaphors like these help us better understand the world and our role in it. However, we must not mistake the symbol for the thing itself. The discussion about AI is also often dominated by metaphors: the dual image of the machine as an artificial human and the human as a machine. But a human is neither a machine nor a saucepan. Just as a map helps us find our way around an unfamiliar city, such vivid comparisons help us better understand the world, people, and human behavior. But the map is not the city, and the person is not the metaphor.
Metaphors such as the human-machine duality can lead to false conclusions: pessimists paint the picture of an artificial, intelligent machine that replaces humans. Optimists, on the other hand, expect future AI machines to exceed the human mind exponentially – even to the point that we upload our minds into the machine and thus become part of our own invention. Both views share the false premise that humans are intelligent machines that merely need to be replicated.
Artificial Intelligence – Used Responsibly
We already use the key technology AI in many areas of our society to extend our mental and physical abilities. This is not happening for the first time: from the first hand tools to the steam engine and the automobile to computers and smartphones, people have always integrated new technologies into their lives quickly.
But if we want to understand the potential and risks of new AI algorithms and keep the technology fully under control, we need to lift the discussion out of the metaphors and see AI for what it is: another, albeit very powerful, tool in our hands. Like any other tool, AI can be misused to discriminate, marginalize, or surveil. However, the technology can also help detect diseases more quickly, find the right vaccine among billions of potential candidates, or help us maintain our physical mobility into old age.
To better understand, research, responsibly apply, and further develop AI algorithms as technological tools, we at Lindera define the following principles:
There Is No Such Thing As Ethical AI With Unethical Goals
Whether an AI technology is ethical and serves the individual and society depends on the objective the developer or company pursues. As a health tech company, we at Lindera pursue the goal of promoting, improving, and maintaining the physical mobility of people of all ages as much as possible. Together with scientists, doctors, athletes, and nurses, we develop deep-technology applications that bring us closer to this goal. In doing so, we help preserve individual freedom and quality of life while at the same time relieving the burden on society.
People Are Not Data, And Data Are Not An End In Themselves
We know how powerful our AI algorithms are and how dependent they are on large amounts of data. We will only achieve our goal, however, if it is in harmony with the people affected – and humans are not data that can simply be collected and analyzed. Recognizing this fact is central to integrating a deep-technology application sustainably. At Lindera, we work with experts from different fields to determine which parts of which AI algorithms are used in a technical application, and on that basis we decide on the type, amount, and use of data. This procedure ensures data minimization and data protection as a matter of course.
Decision Intelligence: Knowing How AI Makes Its Decisions
When deciding which AI algorithms to use in our deep-technology applications, we at Lindera apply the discipline of Decision Intelligence (DI). With the help of DI, we can understand the entire decision-making process of an AI. Let us illustrate this with our own decision-making: to make a decision, we consider all of its important elements: What do I start with? What information do I need to make an informed choice? What are the consequences of my decision? From these questions and this train of thought, a scheme results: a causal decision diagram (CDD). At each edge of the diagram, we could now insert an AI algorithm that generates data-based knowledge for an informed decision.
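To make the idea of a CDD more concrete, here is a minimal sketch in Python that represents such a diagram as a list of directed edges from inputs to consequences. The node names (smartphone video, gait parameters, fall risk score, and so on) are illustrative assumptions for this example only, not Lindera's actual diagram.

```python
# Minimal sketch of a causal decision diagram (CDD) as a directed graph.
# Node names are hypothetical placeholders, not a real Lindera model.

# Each edge links a cause to an effect on the path from what we start with
# to the consequences of the decision.
cdd_edges = [
    ("smartphone_video", "gait_parameters"),    # could be estimated by a CV model
    ("gait_parameters", "fall_risk_score"),     # could be estimated by a statistical model
    ("fall_risk_score", "care_intervention"),   # the decision itself
    ("care_intervention", "mobility_outcome"),  # the consequence we care about
]

# An AI algorithm could be inserted at any edge where our knowledge is
# uncertain, to generate data-based evidence for that causal step.
uncertain_edges = {("gait_parameters", "fall_risk_score")}

for cause, effect in cdd_edges:
    note = "candidate for an AI/statistical model" if (cause, effect) in uncertain_edges else "domain knowledge"
    print(f"{cause} -> {effect}: {note}")
```

Walking through the edges like this makes explicit which steps rest on established domain knowledge and which ones would benefit from a data-based model.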
If our knowledge is uncertain at any point in the CDD, data-based statistical inference can help us. One example of such uncertainty is whether a particular medical treatment has a specific effect. In that case, we could run a randomized controlled trial or apply a data-based causal inference method to reduce the uncertainty.
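As a rough illustration of how a randomized controlled trial reduces such uncertainty, the following sketch estimates an average treatment effect as the difference in group means, with a simple confidence interval. The outcome scores, group sizes, and effect size are simulated placeholders assumed for this example, not real study data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical randomized controlled trial: an outcome (e.g. a mobility score)
# for a treatment group and a control group, simulated here as placeholders.
treated = rng.normal(loc=62.0, scale=8.0, size=120)
control = rng.normal(loc=58.0, scale=8.0, size=120)

# Because assignment was randomized, the difference in group means is an
# unbiased estimate of the average treatment effect (ATE).
ate = treated.mean() - control.mean()

# Simple normal-approximation 95% confidence interval for the difference.
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
ci = (ate - 1.96 * se, ate + 1.96 * se)

print(f"Estimated ATE: {ate:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```

If the confidence interval excludes zero, the data supports the claim that the treatment has an effect; if not, the uncertainty at that edge of the CDD remains.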
Explainable AI – How Does The Algorithm Make Its Decisions?
Once the use of an AI algorithm has been determined, the next step is to make the AI itself explainable. Especially with medical decision-making systems, we must be able to explain the decision-making paths of neural networks and other so-called black-box algorithms. This way, we can avoid errors, spurious correlations, and unconscious bias such as prejudice or discrimination, because every technology reflects its creators and the systems around it. DI provides the right tools for this, such as Explainable AI (XAI). At Lindera, we use these valuable tools to test our AI algorithms and make their decision-making processes transparent.
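One simple XAI technique (offered here as an illustration, not necessarily the tool used at Lindera) is permutation importance: shuffle each input feature in turn and measure how much the model's performance drops. The sketch below applies it to a synthetic dataset with made-up feature names standing in for gait parameters.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical tabular data standing in for gait parameters; the real
# features and model in a medical product would look different.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=42)
feature_names = [f"gait_feature_{i}" for i in range(X.shape[1])]

# A black-box style model whose decisions we want to make transparent.
model = RandomForestClassifier(random_state=42).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure the
# drop in model performance. Large drops mark features the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=42)

for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

An importance profile like this lets domain experts check whether the model's reliance on individual inputs is medically plausible or points to a spurious correlation.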
Open To Research And Technology
As a young technology company, we at Lindera are open to change – be it the latest research and developments in AI, classical algorithms, data protection, cognitive science, learning theory, or game theory. To achieve our goal of promoting, improving, and maintaining people's individual physical mobility, we have set up a broad technological toolbox and are constantly expanding it. We are open to any new tool that takes us a little further towards our goal and complies with the principles above.