
Formal verification for functional safety

Whether in the railway or automotive industry, in medical technology or in aerospace: many application areas involve software functions whose safe operation must be proven, because otherwise human lives are at risk. Although testing is the predominant quality-assurance method in these domains, for the highest safety requirements the use of formal verification methods is recommended for good reason. Increasingly complex systems, however, pose special challenges for both approaches. The goal of formal verification is to obtain rigorous statements about the correct functioning of a system by means of mathematically founded methods. This area of software quality assurance is also part of Fraunhofer FOKUS' expertise.

Procedure and goals of verification

The formal verification of software is based on so-called formal methods, a combination of mathematical logic, discrete mathematics and formally defined languages. Their use is motivated by the expectation that, as in other engineering disciplines, appropriate mathematical analyses help establish the correctness of a system. This is also the big advantage over testing: if a formal method succeeds, it yields a definitive statement whose significance goes far beyond the successful execution of tests. In many cases, the use of formal methods can therefore facilitate certification by authorized bodies.


The requirement profiles for formal verification are laid down in standards for the various fields of application: DO-178C, published in January 2012, is decisive for software in aviation; an important extension of this standard is its supplement DO-333, which provides guidelines and notes for the use of formal methods. In rail transport, EN 50128, also published in 2012, is the central European standard for safety-related software; it specifies which procedures, principles and measures must be applied before the software under examination may be described as safe. ISO 26262, in turn, is the standard for electrical and electronic systems in motor vehicles, which likewise defines a process model and methods for development and production.


Formal verification poses various challenges. As already mentioned, translating real situations into formal, mathematical descriptions is often difficult. In addition, verification increasingly focuses on the interaction of safety (the avoidance of harm) and security (the prevention of malicious misuse). The following scenario illustrates this: an emergency exit door in a cinema must always ensure that visitors can leave the auditorium as quickly as possible. At the same time, it must prevent unauthorized persons from entering from outside without paying. Accordingly, the door must be openable from the inside but remain locked from the outside. Software development encounters similar conflicting requirements in many places.

Another challenge is the correct development of the verification software itself. It must never happen that defective software is classified as error-free (a false negative): in this case the verification is not merely wrong but potentially dangerous. Correctly developed verification software rules out such misjudgments. The opposite case, in which an error is reported although none exists (a false positive), is less critical, but too many spurious reports also undermine the usefulness of the verification software.

Formal methods

Nowadays formal methods are roughly subdivided into the following three groups:

1) Deductive verification

These methods use mathematical arguments and proofs to establish properties of a formal model. A concrete application is to write formal function contracts that describe the expected behavior. Mathematical theorems are then generated from the contracts and the implementation on the basis of Hoare logic, which describes in mathematical terms how the execution of a piece of code changes the state of the computation. Ideally, the proofs of these theorems are carried out by machine using automatic theorem provers; in difficult cases, interactive proof assistants are used as well.
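The contract style can be sketched in a few lines of Python. The function and its contract below are invented for illustration, and the pre- and postconditions are emulated with runtime assertions; a deductive tool such as Frama-C instead proves them statically, for all inputs, from the contract and the loop invariant.

```python
def integer_sqrt(n: int) -> int:
    """Return the largest r with r*r <= n."""
    # Precondition P of the Hoare triple {P} code {Q}
    assert n >= 0, "precondition violated: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        # Loop invariant (the key ingredient of a Hoare-logic proof): r*r <= n
        r += 1
    # Postcondition Q: r really is the integer square root of n
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```

A deductive verifier turns the contract and the loop invariant into proof obligations and discharges them with a theorem prover, so the postcondition is guaranteed for every valid input, not only for the inputs that happen to be tested.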

2) Model checking

In model checking, all possible behaviors of a formal model are exhaustively analyzed to determine whether a desired property, typically specified as a temporal-logic formula, is satisfied. If it is not, the model checker provides a counterexample that explains why the desired result could not be achieved.
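The idea can be sketched with a tiny explicit-state checker in Python, applied to the emergency-door scenario from above. The door model and all names are illustrative assumptions, not an excerpt from a real model-checking tool.

```python
from collections import deque

def check_invariant(initial, transitions, invariant):
    """Exhaustively explore all reachable states; return None if the
    invariant holds everywhere, else a counterexample path."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if not invariant(state):
            return path  # the counterexample explains why the property fails
        for nxt in transitions(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Door model: state = (opens_from_inside, opens_from_outside).
def buggy_transitions(s):
    inside, outside = s
    return [(False, False),    # "lock everything" -- disables the escape route!
            (True, outside),   # re-enable opening from the inside
            (inside, False)]   # lock the outside

def fixed_transitions(s):
    inside, outside = s
    return [(True, outside), (inside, False)]  # escape route can never be disabled

initial = (True, False)
escapable = lambda s: s[0]  # safety property: door always opens from the inside

cex = check_invariant(initial, buggy_transitions, escapable)  # counterexample path
ok = check_invariant(initial, fixed_transitions,
                     lambda s: s[0] and not s[1])              # safety AND security
```

For the buggy model the checker returns the path leading to the state in which the door is locked on both sides; for the fixed model it returns None, meaning both the safety and the security property hold in every reachable state.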

3) Abstract Interpretation

While testing executes a program with concrete input data, abstract interpretation analyzes the program's executions statically. Moreover, the analysis is not limited to individual inputs but covers the program state for very large ranges of input values. In this abstract execution (interpretation) of the program, the program state is deliberately considered in a coarsened form. This coarse analysis can lead to imprecise results (spurious warnings), but it guarantees that no error is ever overlooked: the analysis never declares a program error-free when in reality an error exists.
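The principle can be sketched with the classic interval domain, where each variable is tracked as an interval covering all of its possible concrete values. The following Python toy is a hypothetical sketch, not a real analyzer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Abstract value: the set of all numbers between lo and hi."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Abstractly "execute" y = x*x + x for EVERY x in [-2, 3] at once.
x = Interval(-2, 3)
y = x * x + x          # -> Interval(-8, 12)
# The true range of x*x + x on [-2, 3] is [-0.25, 12]: the abstract result
# is coarser (imprecise), but it soundly contains every concrete outcome.
```

This is exactly the trade-off described above: the analysis may be pessimistic, but any value the program can actually produce is guaranteed to lie inside the computed interval.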

Verification at FOKUS

The scientists at Fraunhofer FOKUS have been working for years on the further development and practical introduction of formal verification methods.

Device-Soft: Deductive verification for safety-critical software of embedded systems

In the Device-Soft project, deductive verification methods were used for the quality assurance of safety-critical embedded systems, based on the static analysis platform Frama-C developed by CEA List. Mathematical methods such as Hoare logic provide a set of logical rules that allow statements about the correctness of computer programs to be made with mathematical rigor. Experience has shown that unit proofs deliver higher quality at lower cost than software tests. The use of deductive methods thus facilitates the certification of safety-critical software, since, in contrast to testing, it can be proven that the software behaves according to its specification for all possible inputs.


In safety-critical industries, substantial sums are invested in lengthy verification processes. With the Internet of Things (IoT), the industry's focus has shifted to short development cycles and the lowest possible costs. This creates a conflict between established verification methods and the requirements of the new industries, making a comprehensive examination of safety and security in the IoT area more difficult.

The Vessedia project, running since September 2017, broadens the range of applications of software analysis tools in order to make them more flexible and usable on short timescales in highly dynamic systems with less critical applications. In addition, the analysis methods developed are intended to reduce costs. The project works both on the fundamentals and on the practical application of the results. Its main goals are therefore:

  • the development of a methodology for the effective use of software analysis tools,
  • the standardization of tools for static analysis in order to expand their application possibilities,
  • the demonstration of the improved analysis methods on the operating system Contiki OS, which is widely used for the development of IoT applications,
  • the development of a “Security Certification Level” (SCL) for IoT applications for which Common Criteria certification is too cost-intensive.

Fraunhofer FOKUS coordinates the Vessedia project and is responsible for the methodology. The scientists also work on the quality assurance of the analysis tools themselves. Furthermore, research is being conducted into approaches that allow static analyses with Frama-C to run faster in the cloud, and guidelines are being developed for a cost-efficient combination of static analysis and testing for IoT applications.