In October 2015, an Artificial Intelligence accomplished something long considered impossible: defeating a professional player at the highly complex game of Go. The AI, named AlphaGo and developed by Google, beat the multiple-time European champion Fan Hui and has since confirmed its success repeatedly, even against the world’s best Go player, Ke Jie. AlphaGo is based on machine learning. Its successor, AlphaZero, has since refined the approach: it trains itself purely from the rules of the respective game and has mastered chess and shogi better than any human ever has.
Google’s AI is thus at the forefront of a development in machine learning that has given new momentum to the previously stagnating research field of Artificial Intelligence. Machine learning describes the process of deriving patterns and rules from existing data and has advanced greatly in the recent past. It is now widely accepted that AI is here to stay. That is why Artificial Intelligence is one of the core topics at Fraunhofer FOKUS, and why its role will become even more important in the near future. For example, FOKUS will be involved in two upcoming events dealing with the impact of AI progress. Links to the ASQF Quality Day and AIQE 2020 Workshop websites can be found at the end of this email.
What is AI?
Defining AI is difficult in the absence of a common definition of intelligence.
Fundamentally, a distinction must be made between strong and weak AI. The aim of strong AI is to reproduce human reasoning in its entirety and, if possible, to exceed it. A strong AI acts autonomously and on its own initiative. To this day, however, this form of Artificial Intelligence has not been achieved.
Weak Artificial Intelligence, on the other hand, describes the attempt to develop computer algorithms that can solve specific problems independently and, ideally, better than human users. Weak AI is already used in many areas today and is an integral part of industrial processes. However, one must distinguish between two approaches to building such AIs.
Two kinds of AI
Symbolic AI is a top-down approach: it only works where there is a formal system of symbols, i.e. a fixed and completely known set of rules. Using mathematical logic, new knowledge can be derived from these specifications, which is why it is often referred to as knowledge-based AI. However, symbolic AI reaches its limits where humans can no longer provide correct and consistent information. In addition, symbolic AIs cannot handle very large state spaces such as that of the aforementioned Go, which has about 10^170 possible board positions, and therefore cannot be used there.
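The principle of deriving new knowledge from a fixed set of rules can be sketched in a few lines. The following is a toy forward-chaining inference engine in Python; the facts and rules are hypothetical examples, not taken from any real knowledge base:

```python
# Toy forward-chaining inference: derive new facts from a fixed,
# fully known rule set, as in symbolic (knowledge-based) AI.
# Each rule maps a set of premises to one conclusion (illustrative only).

rules = [
    ({"penguin"}, "bird"),        # every penguin is a bird
    ({"bird"}, "has_wings"),      # every bird has wings
    ({"has_wings"}, "can_fly"),   # simplification: wings imply flight
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"penguin"}, rules))
# derives: penguin -> bird -> has_wings -> can_fly
```

The sketch also illustrates the brittleness described above: the rule set here is subtly wrong (penguins cannot fly), and the system has no way to notice this on its own.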
Subsymbolic AI, in contrast, is a bottom-up approach: the goal is to generate insights and predictions about the best possible next steps based on large amounts of raw data and with the help of mathematical models. The most important branch of subsymbolic AI is that of so-called neural networks. In a usually multi-layered process that is not directly observable by humans, artificial neurons are connected, creating a network of artificial knowledge and intelligence. This process is called machine learning; in the context of computer-based systems, “learning” is usually used synonymously with optimization. The AI therefore trains itself on the provided data sets and continuously improves. However, the human operator still selects the training data that is made available to the AI as a starting point.
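That “learning is optimization” can be made concrete with a minimal sketch: a single artificial neuron (a linear model) recovers the rule hidden in its training data by gradient descent on the squared error. The data and hyperparameters below are illustrative, not from any real system:

```python
# Minimal sketch of machine learning as optimization: fit y ≈ w*x + b
# by gradient descent on squared error. The "knowledge" (w ≈ 2, b ≈ 1)
# is never programmed in -- it emerges from the training data.

data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]  # hidden rule: y = 2x + 1

w, b = 0.0, 0.0          # start with no knowledge
lr = 0.01                # learning rate (step size)
for _ in range(2000):    # the training loop is an optimization loop
    dw = db = 0.0
    for x, y in data:
        err = (w * x + b) - y      # prediction error on one example
        dw += 2 * err * x          # gradient of squared error w.r.t. w
        db += 2 * err              # gradient w.r.t. b
    w -= lr * dw / len(data)       # step against the average gradient
    b -= lr * db / len(data)

print(round(w, 3), round(b, 3))    # ≈ 2.0 and 1.0: the pattern was "learned"
```

Real neural networks do exactly this, only with millions of parameters across many layers, which is why their internal state is no longer interpretable by humans.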
Hurdles of AI
Since the AI trains itself on the training data sets during machine learning, the result is a so-called black-box procedure: the human user does not know the reasons for the AI’s decisions. This creates a number of issues. On the one hand, these are statistical methods that nowadays function with sufficient accuracy. On the other hand, when mistakes or errors do occur, they occur unexpectedly and often go undetected by the user. In some cases this can have fatal consequences, as when autonomously driving Teslas were involved in accidents in the United States. The growing importance of Artificial Intelligence therefore raises not only technical but also ethical and legal questions about responsibility for an algorithm’s decisions, especially in safety-critical areas.
Additional problems may arise from the training data sets themselves. If these data sets already contain a bias, this bias is taught to the AI and passed on. This happens because Artificial Intelligence always assumes objectivity in the provided data, from which it derives its patterns and rules. For biased data this assumption does not hold, and prejudices are transplanted into the model. An example illustrates the problem: if one were to leave the selection of candidates for a management position to an Artificial Intelligence, it would be conceivable for the AI to propose only men from Central Europe, since in the past this group has almost exclusively occupied management positions. Because this evaluation is made by a black-box procedure, the bias cannot be detected from the outside, so the mistake goes unrecognized and the procedure is ultimately only seemingly objective.
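The mechanism of bias transfer can be shown with a deliberately naive sketch: a “hiring AI” that learns hire rates per group from historical data simply reproduces the historical skew. The data set below is fabricated purely for illustration:

```python
# Toy illustration of bias transfer: the model treats skewed historical
# data as objective ground truth and passes the skew on as a "rule".
# All numbers are fabricated for illustration.

history = (
    [("male_central_eu", True)] * 90 + [("male_central_eu", False)] * 10 +
    [("other", True)] * 5 + [("other", False)] * 95
)

def train(data):
    """Learn P(hired | group) by counting -- the data is taken as objective."""
    stats = {}
    for group, hired in data:
        total, hires = stats.get(group, (0, 0))
        stats[group] = (total + 1, hires + hired)
    return {g: hires / total for g, (total, hires) in stats.items()}

model = train(history)
recommend = {g: rate > 0.5 for g, rate in model.items()}
print(recommend)  # {'male_central_eu': True, 'other': False} -- bias passed on
```

Nothing in the code is “prejudiced”; the discrimination enters entirely through the training data, which is exactly why it is invisible from the outside.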
A further source of error is so-called overfitting, which occurs when the AI fits the training data in too much detail and becomes too narrowly tuned to the specific training data set used. Conversely, there is the opposite effect of underfitting, where too little data is used for training, so that the AI cannot recognize all the underlying rules and patterns and therefore functions only inaccurately.
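Both effects can be demonstrated with simple polynomial regression (a sketch assuming NumPy; the data is a synthetic noisy sine curve, not from any real project). A degree-1 model underfits, while a degree-11 model memorizes the twelve training points almost perfectly but fails on held-out data:

```python
# Sketch of under- and overfitting with polynomial regression.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 12)  # noisy samples
x_test = np.linspace(0.02, 0.98, 50)        # held-out points on the same curve
y_test = np.sin(2 * np.pi * x_test)

def mse(degree, x, y):
    """Fit a polynomial on the training data only, then measure error on (x, y)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

under_train = mse(1, x_train, y_train)   # underfit: a line misses the pattern
over_train = mse(11, x_train, y_train)   # overfit: near-zero training error...
over_test = mse(11, x_test, y_test)      # ...but poor error on unseen data
print(under_train, over_train, over_test)
```

The overfitted model looks perfect if one only evaluates it on its own training data, which is precisely why held-out test data is indispensable.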
These issues have often led to justified criticism of Artificial Intelligence and call for the regulation of AI applications. The AI Bundesverband – a group of 160 companies in the AI industry – has launched the KI Gütesiegel (AI Quality Seal) initiative. The quality seal represents a self-commitment by the participating companies: they guarantee compliance with the industry’s self-imposed standards. This includes, among other things, carefully checking and handling training data sets to minimize the risk of discriminatory mechanisms in the algorithms. Critically, however, there is currently no sanction for non-compliance with the standards, and the seal is not awarded by experts. Nevertheless, the introduction of the Quality Seal is probably the right step towards more transparency in the field of Artificial Intelligence and Machine Learning.
The research field Mobility comprises a number of safety-critical areas in which the functionality of the software used must be guaranteed at all times. This makes the use of AI, for example in the autonomous driving of automobiles, a particularly great challenge. Fully autonomous vehicles would require highly developed, multi-layered neural networks. However, this would inevitably lead to the transparency problems described above, and the AI’s decision-making would not be comprehensible. Furthermore, errors in the AI cannot yet be ruled out completely. There are thus still too many unknown factors that attackers could target in order to negatively influence the function of the systems.
At FOKUS’ System Quality Center, we have therefore concentrated our research on the safety of driver assistance systems that support the driver but still require human control. In this modular procedure, the recognition and, in some cases, the assessment of the consequences of hazards is carried out by the computer, but the final control decision lies with the human. At present, image and video recognition methods are mainly used for hazard detection. However, caution is required here as well: experiments by a consortium of universities in the United States have shown that even marginal changes to traffic signs, barely recognizable to humans, can lead to misinterpretations and false recognitions by the on-board computer.
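The underlying effect can be sketched on a toy linear classifier in pure Python. This is a simplified illustration of the adversarial-perturbation idea, not the traffic-sign experiments themselves; all weights and inputs are hypothetical:

```python
# Toy adversarial perturbation: a tiny input change, aligned against the
# model's weights, flips the classification even though each individual
# feature barely moves. Weights, inputs, and labels are illustrative.

w = [0.5, -0.8, 0.3, 0.9]       # fixed classifier weights (hypothetical)
b = -0.1

def classify(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "stop_sign" if score > 0 else "speed_limit"

x = [0.4, 0.1, 0.2, 0.1]        # original input: score 0.17 -> "stop_sign"
eps = 0.1                        # small perturbation budget per feature
# nudge every feature slightly *against* the sign of its weight
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(classify(x), "->", classify(x_adv))  # stop_sign -> speed_limit
```

Because every feature changes by at most 0.1, the perturbed input looks almost identical, yet the combined effect on the decision score is enough to flip the result; deep image classifiers are vulnerable to the same principle.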
Fraunhofer FOKUS, in cooperation with the Automotive Quality Institute in Berlin, is currently working on improving the security of Artificial Intelligence in the field of automated driving. Projects in the field of aviation are also planned for the future.
In the area of Smart Cities, one of the most important application areas at SQC, AI is used, for example, in proactive transportation planning. The aim is to adequately address the various needs of different users and stakeholders. An example is a route that takes a user from a Berlin suburb to Munich by train: it involves the transport networks of the BVG (Berlin), Deutsche Bahn for the train journey, and the MVG (Munich), which must all be coordinated with one another – a process that is very complex and costly today.
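One way to picture the coordination task is as shortest-path search over a single merged graph whose edges are labeled with the operating network. The following sketch uses hypothetical stops and travel times; real planning must additionally handle timetables, transfers, and fares:

```python
# Sketch: multi-network routing as shortest-path search over one merged
# graph. Stops, travel times (minutes), and network labels are hypothetical.
import heapq

edges = {  # (from, to): (minutes, operating network)
    ("suburb", "berlin_hbf"): (35, "BVG"),
    ("berlin_hbf", "muenchen_hbf"): (240, "DB"),
    ("muenchen_hbf", "destination"): (20, "MVG"),
}

def shortest_path(start, goal):
    """Dijkstra's algorithm over the merged graph; returns (minutes, networks)."""
    graph = {}
    for (a, b), (minutes, net) in edges.items():
        graph.setdefault(a, []).append((b, minutes, net))
    heap = [(0, start, [])]
    seen = set()
    while heap:
        cost, node, legs = heapq.heappop(heap)
        if node == goal:
            return cost, legs
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, net in graph.get(node, []):
            heapq.heappush(heap, (cost + minutes, nxt, legs + [net]))
    return None

print(shortest_path("suburb", "destination"))  # (295, ['BVG', 'DB', 'MVG'])
```

Even this toy version shows where the complexity comes from: every leg of the journey lives in a different operator’s data, and the networks only become routable once merged into one consistent model.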
The use of AI in Smart Cities requires a large amount of data. The good news is that much of this data is already being collected in many cities. The bad news: it is rarely combined effectively at present. Moreover, the data is often highly sensitive; its analysis would, for example, allow movement profiles of citizens to be generated. For this reason, FOKUS aims to further develop homomorphic encryption methods, which anonymize the data while retaining its structure, thereby allowing the use and training of Artificial Intelligence without violating the personal rights of individuals.
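The core property of homomorphic encryption – computing on encrypted data yields the encryption of the computed result – can be illustrated with textbook RSA, which is multiplicatively homomorphic. This toy (with deliberately tiny keys) only shows the principle; the schemes needed to protect citizen data in practice are far more sophisticated:

```python
# Illustration of the homomorphic property with textbook RSA:
# multiplying ciphertexts corresponds to multiplying plaintexts,
# so a computation can run without ever decrypting the inputs.
# Key sizes are toy values for readability -- insecure by design.

p, q = 61, 53
n = p * q                      # modulus: 3233
e, d = 17, 2753                # public / private exponents (textbook values)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(7), encrypt(6)
c_product = (c1 * c2) % n      # compute on ciphertexts, no decryption needed
print(decrypt(c_product))      # 42 == 7 * 6: structure preserved under encryption
```

This is exactly the structure-preserving property the text refers to: a third party can perform the multiplication without ever seeing 7 or 6 in the clear.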
Another project will improve the efficiency of food supply chains in everyday life. Here, too, there are a number of stakeholders: from producer to distributor to seller to consumer, a sustainable process must be maintained, which can be extremely complex given the sheer number of parties involved, especially today. In this area, AI can help identify and improve weak points in existing processes.
The research in the field of AI fits seamlessly into the OUP Plus Smart Cities reference architecture developed at FOKUS. Particularly in the areas of “Data Management & Analytics Capabilities” and “Integration, Choreography, and Orchestration Capabilities”, AI applications could help to further optimize data processing and analysis in the future.