[Image: Vehicle remote control. Fraunhofer FOKUS]

Tele-operation for highly automated vehicles

News from Oct. 10, 2018

At NVIDIA’s GPU Technology Conference (GTC) from October 9 to 11, 2018, in Munich, Fraunhofer FOKUS researchers will demonstrate on the outdoor exhibition grounds how they can control a vehicle tele-operatively with minimal latency. This helps bridge the gap from driver-controlled to fully autonomous vehicles. In addition, they will present their semi-automated tool for processing image data from Lidar laser scanners and cameras for training the artificial intelligence (AI) of smart cars.

In the near future, there will be situations in which highly automated and autonomous vehicles also rely on a remote human operator: for example, a truck that cannot drive autonomously from the company premises to a loading ramp because it has no access to digital maps of the private property, or an autonomous car that has to navigate an unforeseen construction zone. At GTC Europe, the FOKUS researchers from the business unit Smart Mobility, together with the Daimler Center for Automotive IT Innovations (DCAITI) at TU Berlin, will demonstrate how tele-operated driving with low latency can be achieved to help smooth the path to full autonomy.

The operator controls the vehicle realistically, with a steering wheel, accelerator, and brake pedal. A map showing the current route is displayed on a second monitor. The decisive factor for safe tele-operation is that the transmission of both the environmental data and the control signals introduces as little delay as possible.
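
The sketch below illustrates one way such a low-latency control channel could look: steering and pedal values are packed into small UDP datagrams and streamed at a fixed rate, with a timestamp that lets the vehicle discard stale commands. The address, packet layout, and command rate are illustrative assumptions, not the actual FOKUS protocol.

```python
# Minimal sketch of a low-latency tele-operation control channel.
# VEHICLE_ADDR, the packet layout, and the 20 ms send period are
# illustrative assumptions, not the Fraunhofer FOKUS protocol.
import socket
import struct
import time

VEHICLE_ADDR = ("10.0.0.2", 5005)   # assumed address of the vehicle gateway
SEND_PERIOD_S = 0.02                # 50 Hz command rate (assumption)

# UDP avoids retransmission delays; a late command is worthless anyway.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_command(steering: float, throttle: float, brake: float) -> None:
    """Pack one control frame and send it immediately.

    The monotonic timestamp lets the vehicle drop commands that arrive
    too late instead of acting on stale input.
    """
    packet = struct.pack("!dfff", time.monotonic(), steering, throttle, brake)
    sock.sendto(packet, VEHICLE_ADDR)

def read_operator_inputs() -> tuple[float, float, float]:
    """Placeholder for reading the operator's steering wheel and pedals."""
    return 0.0, 0.1, 0.0

if __name__ == "__main__":
    while True:
        send_command(*read_operator_inputs())
        time.sleep(SEND_PERIOD_S)
```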

Tool for semi-automatic annotation of image data

AI is of great importance for the vehicle's perception of its immediate surroundings. When a highly automated vehicle drives on the road, a large number of cameras and other sensors, such as Lidar, record information about the surroundings. This data must be analyzed in real time so that pedestrians, bicyclists, vehicles, and other objects in the scene are detected reliably.
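
As a rough illustration of this perception step, the following sketch runs a pre-trained off-the-shelf detector (torchvision's Faster R-CNN, used here purely as a stand-in for the FOKUS perception stack) on a single camera frame and keeps only confident detections.

```python
# Illustrative sketch of per-frame object detection with an off-the-shelf
# model; this is a stand-in, not the FOKUS perception algorithms.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect_objects(frame: torch.Tensor, score_threshold: float = 0.5):
    """frame: float tensor of shape (3, H, W) with values in [0, 1]."""
    (result,) = model([frame])                 # the model accepts a list of images
    keep = result["scores"] >= score_threshold # drop low-confidence detections
    return result["boxes"][keep], result["labels"][keep], result["scores"][keep]

# Example: a random tensor standing in for one camera image.
boxes, labels, scores = detect_objects(torch.rand(3, 480, 640))
print(f"{len(boxes)} objects above threshold")
```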

With the help of FOKUS' AI algorithms, image data from cameras and laser scanners are divided into object classes such as roadway, vehicle, and pedestrian, and individual instances of these objects are separated from each other. In this way, even individual pedestrians in a crowd can be told apart. For each object, the algorithms also calculate its position and rotation in 3D space.
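
A semi-automatic pre-annotation step of the kind described above could be sketched as follows, assuming a pre-trained instance segmentation model (torchvision's Mask R-CNN, again only a stand-in for the FOKUS algorithms): each detected instance receives its own mask and a draft class label, so neighbouring pedestrians remain separate objects, and a human annotator then reviews and corrects the proposals. The 3D position and rotation fields are left empty here; in the described tool they would be derived from the corresponding Lidar data.

```python
# Sketch of semi-automatic pre-annotation: an off-the-shelf instance
# segmentation model proposes per-object masks and labels that a human
# annotator later corrects. Model choice and output format are assumptions.
import json
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def pre_annotate(frame: torch.Tensor, score_threshold: float = 0.7) -> list[dict]:
    """Return one draft annotation per detected instance.

    frame: float tensor of shape (3, H, W) in [0, 1].
    The pose_3d field stays empty; it would be filled from Lidar data.
    """
    (result,) = model([frame])
    annotations = []
    for box, label, score, mask in zip(
        result["boxes"], result["labels"], result["scores"], result["masks"]
    ):
        if score < score_threshold:
            continue
        annotations.append({
            "class_id": int(label),
            "box_xyxy": box.tolist(),
            "mask_area_px": int((mask[0] > 0.5).sum()),  # one mask per instance
            "pose_3d": None,  # to be derived from Lidar / refined by the annotator
        })
    return annotations

# Example: draft annotations for a dummy camera frame.
drafts = pre_annotate(torch.rand(3, 480, 640))
print(json.dumps(drafts[:2], indent=2))
```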