SPATIAL
Security and Privacy Accountable Technology Innovations, Algorithms, and Machine Learning
Sep. 01, 2021 to Aug. 31, 2024
Artificial intelligence (AI) is regarded as a key technology that can enable cars to drive intelligently and autonomously, make supply chains easier to plan, and make production processes more efficient, thereby helping to achieve sustainability goals. However, the use of AI also raises legal and ethical issues as well as questions of data protection and resilience. The newly launched SPATIAL (Security and Privacy Accountable Technology Innovations, Algorithms, and Machine Learning) project aims to answer questions relating to trustworthy AI.
The SPATIAL project is funded under the European Commission's Horizon 2020 program. The project partners aim to support the development of trustworthy artificial intelligence. Especially in safety-critical areas, AI must also be protected against malicious attacks (cybersecurity). SPATIAL therefore aims to close identified gaps concerning data and black-box AI by developing resilient metrics, privacy-preserving methods, verification tools, and system solutions. In addition, the project partners have set themselves the goal of imparting the corresponding skills through further training programs.
Within the project, FOKUS researchers will develop dedicated toolchains for detecting potential vulnerabilities in, and possible attacks on, AI/ML algorithms used for cybersecurity, define effective countermeasures, and test them in controlled testbeds and network simulations. In addition, there will be a strong focus on the explainability of AI/ML algorithms: exploring and defining specific metrics that can efficiently evaluate ML-based systems and serve as a basis for the certification of AI-based algorithms. Initial use cases for these metrics and the targeted explainability concepts will come from intrusion detection systems and firewalls, 112 emergency communications, 5G, and IoT.
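The project description does not specify which attack techniques the toolchains will cover. As a purely illustrative example of the kind of vulnerability probing involved, the following sketch applies the well-known Fast Gradient Sign Method (FGSM) to a classifier in PyTorch; the toy model, random data, and epsilon value are hypothetical stand-ins, not SPATIAL deliverables.

```python
# Minimal sketch of an FGSM adversarial probe (hypothetical model and data,
# not part of the SPATIAL toolchain itself).
import torch
import torch.nn as nn

def fgsm_probe(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               epsilon: float = 0.03) -> float:
    """Return the accuracy of `model` on FGSM-perturbed inputs."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each input in the direction that maximally increases the loss.
    x_adv = (x + epsilon * x.grad.sign()).detach()
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    return (preds == y).float().mean().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(20, 2)    # stand-in for a real IDS classifier
    x = torch.randn(64, 20)     # stand-in for real feature vectors
    y = torch.randint(0, 2, (64,))
    print(f"adversarial accuracy: {fgsm_probe(model, x, y):.2f}")
```

A sharp drop from clean to adversarial accuracy would flag the model as sensitive to small, maliciously crafted input perturbations, which is exactly the kind of finding such a toolchain would need to surface before certification.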
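Likewise, the source leaves the concrete explainability metrics open. One plausible, cheap-to-compute candidate is a "deletion fidelity" score: occlude the features an explanation ranks highest and measure how much the model's confidence drops. The function names and the zero baseline below are illustrative assumptions.

```python
# Sketch of a hypothetical "deletion fidelity" metric for evaluating how
# faithful a feature-attribution explanation is to the model it explains.
import numpy as np

def deletion_fidelity(predict_proba, x: np.ndarray,
                      attributions: np.ndarray, k: int,
                      baseline: float = 0.0) -> float:
    """Confidence drop after zeroing the k highest-attributed features.

    A larger drop suggests the explanation really did identify the
    features the model relies on.
    """
    target = int(np.argmax(predict_proba(x[None, :])[0]))
    before = predict_proba(x[None, :])[0, target]
    top_k = np.argsort(attributions)[::-1][:k]   # indices of top-k features
    x_occluded = x.copy()
    x_occluded[top_k] = baseline                 # occlude with a baseline value
    after = predict_proba(x_occluded[None, :])[0, target]
    return float(before - after)
```

Because it only needs class probabilities from the model, a metric of this kind can be evaluated efficiently even for black-box systems, in line with the project's goal of efficient, certification-oriented evaluation of ML-based systems.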