Terms such as machine learning, deep learning, and neural networks are often used interchangeably with artificial intelligence. And the technology is frequently associated with dystopias. But how realistic is the picture of superhuman machines? What is the status quo? And what are the approaches, challenges, and solutions in the development process?
Whether physically superior, highly intelligent replicants that turn against humans over time, as in Blade Runner, a supercomputer that sends robots against humanity, as in I, Robot, or a decisive battle against intelligent machines, as in Terminator – gloomy scenarios of robots superior to humans abound. And beyond the cinema landscape, well-known entrepreneurs and thought leaders have long been concerned with artificial intelligence (AI) and the related questions about the future. Tesla boss Elon Musk has repeatedly warned against AI in his tweets, and economic philosopher Anders Indset pleads for a new economic system so that humans keep the upper hand in the competition with machines for supremacy. Raymond Kurzweil assumes that the technological singularity – i.e., the point at which we humans can no longer comprehend the development of technology – will occur in 2045. But to what extent is this image of technical, intelligent beings that are our equals, or even our superiors, justified? And what does intelligence mean in this context?
The founding father of AI is John McCarthy, who brought the term into play at the Dartmouth Conference in 1956. However, six years earlier, Alan Turing had already asked the crucial question – "Can machines think?" – that continues to shape the field to this day.
In general, a lot has happened in artificial intelligence since the conference – from Eliza, the world's first chatbot, developed at MIT, to the chess-playing AI Deep Blue, which defeated the reigning world champion Garry Kasparov in 1997, to the first self-driving taxis from Waymo. And that was just the beginning. Artificial intelligence is developing exponentially – or, as Canadian Prime Minister Justin Trudeau aptly put it about technological change as early as 2018: "The pace of change has never been this fast, and yet it will never be this slow again." However, all existing AI solutions to date are "weak": they are not able to adapt to new circumstances as quickly and flexibly as the human mind. Intelligence as it is depicted in many Hollywood films simply does not exist yet – or, to put it more drastically: artificial intelligence has no intelligence.
However, this does not mean that the past decades of research have been in vain – on the contrary: the fields of application of AI are extremely diverse and have enormous potential. According to Fortune Business Insights, the global market for AI applications will amount to 267 billion US dollars by 2027 – for comparison: in 2019, it was just 27.23 billion US dollars. The areas of logistics, healthcare, cybersecurity, research and development, finance, advertising, information security, e-commerce, manufacturing, public transport, cloud computing, and the entertainment industry will particularly benefit from this. In general, every industry can take advantage of AI, and the application possibilities of the technology are becoming more and more diverse.
Chatbots in customer service, image and face recognition, sales forecasts, recognition of customer preferences and fraudulent financial transactions, autonomous driving, or recommendations for patient therapies – none of this would be possible without AI or, more precisely, without machine or deep learning and their neural networks. While machine learning (ML) is about how artificial systems can independently generate knowledge and make predictions using algorithms, deep learning is an area of ML in which neural networks analyze large data sets. The latter comprise many interconnected nodes that interact with each other and form new connections as they learn. Unstructured and large data sets can hardly be used meaningfully without deep learning.
There are several network models – for example, feed-forward and recurrent networks. The former is characterized by the fact that the nodes of adjacent layers are connected, and activation runs in one direction, from the input layer via at least one further layer in between to the output layer. A recurrent network is different: the direction of activation varies – it can run in several directions and can therefore form activation loops within the network. This type of network is particularly important when context matters, for example when processing text or images.
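The idea of activation running in one direction can be sketched in a few lines of plain Python. This is a minimal, hypothetical illustration of a feed-forward pass – the layer sizes, weights, and ReLU activation are assumptions chosen for readability, not part of any real model:

```python
def relu(v):
    # Common activation function: negative sums become zero.
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    # One dense layer: each node computes a weighted sum of all
    # inputs plus a bias, then applies the activation function.
    return relu([sum(w * x for w, x in zip(ws, inputs)) + b
                 for ws, b in zip(weights, biases)])

def feed_forward(x, layers):
    # Activation runs strictly forward, layer by layer – no loops back.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Toy network: 2 inputs -> 2 hidden nodes -> 1 output node.
hidden = ([[0.5, -0.5], [1.0, 1.0]], [0.0, -1.0])
output = ([[1.0, 0.5]], [0.0])
print(feed_forward([1.0, 2.0], [hidden, output]))  # [1.0]
```

A recurrent network would differ in that a layer's output could feed back into an earlier layer on the next step, which is what enables the activation loops described above.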
In practice, it is recommended to use the simplest possible model: this keeps complexity low, avoids overloading server capacity for real-time processing, and avoids using too much energy. At the beginning of a project, it is worthwhile to approach a question using simple statistical heuristics or algorithms and, based on these, to decide whether it makes sense to continue working on the solution.
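Such a heuristic baseline can be a one-liner. The sketch below uses hypothetical labels (whether a customer returned a package) and a majority-class baseline: any model worth building should clearly beat this number.

```python
# Hypothetical training labels: did the customer return the package (1) or not (0)?
labels = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0]

# Majority-class baseline: always predict the most frequent label.
majority = max(set(labels), key=labels.count)
baseline_accuracy = sum(1 for y in labels if y == majority) / len(labels)

print(majority, baseline_accuracy)  # 0 0.7
```

If a neural network later reaches, say, 72 percent accuracy against a 70 percent baseline, the added complexity is probably not worth it – which is exactly the decision point this step is meant to surface.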
When developing neural networks for practical applications, the focus must be on the customer or the user. So if the first ideas and approaches are collected internally, the central question is: How large is the number of customers who could benefit significantly from the end product of the idea? What are the customer needs – today and tomorrow – that should be met? Only after this “ideation process” and the final decision do data come into play to develop a minimum viable product with which the first version of the application can be tested.
Currently, the research field of AI has matured to such an extent that setting up algorithms to solve a problem – the proof of concept – is quite feasible. The big challenge is to make this solution suitable for the industry and to be able to scale it. Many developers are aware that the effort of creating the ML code is relatively small compared to the entire project. The real mammoth task is to embed it so that the application works not only for one user but for thousands of users simultaneously. And once this challenge has been overcome, the success of an application is still not guaranteed – because if users do not accept it, it will not find its way into society, and then even the best technology is worthless. It is therefore important to think about the user experience (UX) from the outset. Only a user-friendly – i.e., simple and intuitive – application stands a chance of being adopted.
In connection with AI applications, there is always talk of a loss of control or a black box – the horror scenarios of science fiction. This can be a hindrance for AI-based solutions in the work environment. In general, however, customers' reservations about new applications can be reduced step by step, though this may well be a process lasting several months. The aim of automating processes is to make users' work or lives easier. Ideally, the degree of automation can be selected depending on trust in the technology itself. Interaction should always be possible.
A technology's conclusions become particularly interesting when they initially seem counterintuitive. The term "explainable AI" has established itself for this question: How can you understand how a machine makes decisions? How can we bring applications to users, and how do they have to be designed so that society is not afraid of using them? Or in business: How does an AI arrive at its recommended price? And why is the return probability for a customer with package 1 higher than with package 2?
When it comes to the issue of traceability, it is worth taking a closer look at the structure of the application, since different neural networks can be used. In principle, with feed-forward or recurrent models, for example, it can be traced at any time which unit or node has been activated. However, this does not automatically answer the question of what basis the algorithm used for its decision. Depending on the algorithm's complexity, an explanation is easier or more difficult – once neural networks come into play, traceability becomes significantly harder. One approach to the problem: if an algorithm makes a decision based on six data points, one of these points can be removed to examine how the algorithm's decision changes – though the more complex the model, the more difficult this becomes.
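The leave-one-out idea described above can be sketched in a few lines. Here, a hypothetical linear scoring rule stands in for a trained algorithm, and the six feature names and weights are invented for illustration:

```python
def model(features, weights):
    # Hypothetical linear scoring rule standing in for a trained model.
    return sum(weights[k] * v for k, v in features.items())

weights = {"price": -2.0, "age": 0.5, "visits": 1.0,
           "basket": 1.5, "returns": -1.0, "tenure": 0.3}
customer = {"price": 1.0, "age": 0.4, "visits": 2.0,
            "basket": 0.8, "returns": 1.0, "tenure": 3.0}

full_score = model(customer, weights)

# Remove one data point at a time and see how the score changes.
contributions = {}
for name in customer:
    reduced = {k: v for k, v in customer.items() if k != name}
    contributions[name] = full_score - model(reduced, weights)

for name, delta in contributions.items():
    print(f"{name}: contribution {delta:+.2f}")
```

For a linear model like this toy example, each contribution exactly recovers one weighted feature; in a real neural network, features interact, so such ablation results are only an approximation of the model's reasoning – which is precisely why explainability gets harder as models grow.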
In general, another major challenge is that the data an AI trains with is often already distorted, because humans influenced it. An example: soap dispensers that do not respond to dark skin because they were not trained on it. Or the automatic preselection of CVs in the application process that excluded women. Why? Because the technology was trained on data from the previously human-led recruiting process, in which some recruiters had generally rejected women's résumés. There are many such examples. Setting up as diverse a development team as possible is part of the solution – as is a constant review of the results and adjustment of the technology. Socially, however, there is still a need for debate and proposed solutions before significant progress can be achieved in this area.
There are some promising approaches and potentials in the field of artificial intelligence – and developments in the next few years will be significantly faster than in the last few decades. Nevertheless, we are not only far removed from the dystopia of superhumanly intelligent machines; the picture is also exaggerated. Machines operate in the mathematical world – and are subject to their developers' code. Nonetheless, the potential of the technology is huge, across all industries. We will see major developments in the future, especially in logistics, healthcare, cybersecurity, finance, advertising, information security, e-commerce, manufacturing, transport, cloud computing, and the entertainment industry.
It is important to engage with the technology and always keep an eye on the customer. To what extent can a new, unfamiliar application be introduced to the market and gain users' trust? How good is the applicability of the product? And how much does it benefit customers and satisfy their needs? Ethical questions and challenges must be taken seriously and included in the development as far as possible.
This is the only way to end up with a product that finds its way into the economy and society to everyone's benefit – and not to their horror. Technology should make our lives easier. After all, for nerve-wracking entertainment, there's Hollywood.