Vehicle remote control
Fraunhofer FOKUS

Remote control for highly automated vehicles

News from Oct. 10, 2018

Press Release – At NVIDIA’s GPU Technology Conference (GTC), October 9–11, 2018, in Munich, Fraunhofer FOKUS researchers will demonstrate on the outdoor exhibition grounds how they can tele-operate a vehicle with minimal latency. This helps bridge the gap between driver-controlled and fully autonomous vehicles. In addition, they will present their semi-automated tool for processing image data from Lidar laser scanners and cameras, used to train the artificial intelligence (AI) of smart cars.

In the near future there will be situations in which highly automated and autonomous vehicles still need a remote human operator: for example, a truck that cannot drive autonomously from the company premises to a loading ramp because it has no access to digital maps of the private property, or an autonomous car that must navigate an unforeseen construction zone. At GTC Europe, the FOKUS researchers from the Smart Mobility business unit, together with the Daimler Center for Automotive IT Innovations (DCAITI) at TU Berlin, will demonstrate how low-latency tele-operated driving can help smooth the path to full autonomy.

The operator, in this case a person running the demonstration, is connected to the highly automated vehicle via an encrypted wireless data link, currently WLAN and in the future 5G mobile radio. The vehicle is steered efficiently by means of its own compressed environment model, which interprets image data from up to eight cameras in combination with Lidar laser scanners. The NVIDIA DRIVE platform provides the computing power to run the diverse and redundant algorithms and applications needed for safe highly automated driving. Only significantly reduced metadata is then transmitted to the operator, which saves bandwidth and provides a near real-time picture of the current situation.
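To illustrate the idea of transmitting reduced metadata instead of raw sensor streams, here is a minimal sketch in Python of what one compressed environment-model update might look like. The message layout, field names, and object classes are assumptions made for illustration; the actual FOKUS protocol is not public.

```python
# Minimal sketch of a compressed environment-model message. All names,
# fields, and sizes are illustrative assumptions, not the actual protocol.
import struct
from dataclasses import dataclass

@dataclass
class TrackedObject:
    obj_id: int       # stable track id
    obj_class: int    # e.g. 0=vehicle, 1=pedestrian, 2=bicyclist
    x: float          # position relative to the ego vehicle, metres
    y: float
    heading: float    # orientation, radians
    speed: float      # metres per second

# One object packs into 22 bytes: far less than a raw camera frame.
OBJ_FORMAT = "<IHffff"  # little-endian: id, class, x, y, heading, speed

def encode_frame(objects: list[TrackedObject]) -> bytes:
    """Serialize one environment-model update for the uplink."""
    payload = struct.pack("<H", len(objects))
    for o in objects:
        payload += struct.pack(OBJ_FORMAT, o.obj_id, o.obj_class,
                               o.x, o.y, o.heading, o.speed)
    return payload

def decode_frame(payload: bytes) -> list[TrackedObject]:
    """Reverse of encode_frame, run on the operator side."""
    (count,) = struct.unpack_from("<H", payload, 0)
    size = struct.calcsize(OBJ_FORMAT)
    return [TrackedObject(*struct.unpack_from(OBJ_FORMAT, payload, 2 + i * size))
            for i in range(count)]
```

Under these assumptions, an update with 50 tracked objects packs into roughly 1.1 KB, orders of magnitude less than streaming eight camera feeds.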

The operator controls the vehicle realistically, with a steering wheel, accelerator, and brake pedal. A map with the current route is displayed on a second monitor. The decisive factor for safe tele-operation is that the transmission of both the environment data and the control signals introduces as little delay as possible.
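The following sketch shows one simple way to keep control signals safe under latency: each command carries a send timestamp, and the vehicle discards commands that arrive too late. The UDP transport, message layout, and the 100 ms staleness bound are illustrative assumptions, and the check presumes synchronized clocks on both ends.

```python
# Illustrative timestamped control message with a staleness check, assuming
# a plain UDP link. A real tele-operation stack would use a safety-certified
# protocol; the layout and threshold here are invented for illustration.
import socket
import struct
import time

CTRL_FORMAT = "<dff"   # send time (s), steering angle (rad), throttle/brake (-1..1)
MAX_AGE_S = 0.1        # discard commands older than 100 ms (assumed bound)

def send_command(sock: socket.socket, addr, steering: float, throttle: float):
    """Operator side: stamp and transmit one control command."""
    sock.sendto(struct.pack(CTRL_FORMAT, time.time(), steering, throttle), addr)

def receive_command(sock: socket.socket):
    """Vehicle side: return the command, or None if it arrived too late.

    Assumes both clocks are synchronized (e.g. via GPS time or NTP).
    """
    data, _ = sock.recvfrom(64)
    sent_at, steering, throttle = struct.unpack(CTRL_FORMAT, data)
    if time.time() - sent_at > MAX_AGE_S:
        return None   # too stale to act on safely
    return steering, throttle
```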

Tool for semi-automatic annotation of image data

AI is of great importance for the vehicle's perception of its immediate surroundings. When a highly automated vehicle drives on the road, a large number of cameras and other sensors, such as Lidar, record information about the surroundings. This data must be analyzed in real time, reliably detecting pedestrians, bicyclists, vehicles, and other objects in the scene.
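The press release does not name the detection networks running on the vehicle. As a generic stand-in, the sketch below runs a pretrained torchvision detector over a camera frame and keeps only object classes close to those mentioned in the text; the class mapping and score threshold are assumptions.

```python
# Generic real-time detection loop with a pretrained torchvision model,
# offered as an illustration only, not as the networks FOKUS deploys.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# COCO label indices roughly matching the classes named in the text.
RELEVANT = {1: "pedestrian", 2: "bicyclist", 3: "vehicle"}

@torch.no_grad()
def detect(frame: torch.Tensor, score_threshold: float = 0.5):
    """frame: float tensor of shape (3, H, W), values scaled to [0, 1]."""
    (pred,) = model([frame])
    hits = []
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        cls = RELEVANT.get(int(label))
        if cls is not None and float(score) >= score_threshold:
            hits.append((cls, box.tolist(), float(score)))
    return hits
```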

With new AI methods, particularly deep learning, the computer learns what a human or a tree looks like by training on huge amounts of data. The better the training data, the more accurately the car learns to “see”. Until now, labeling this training data, i.e. marking what a tree, a person, or a car is, has been very time-consuming. At GTC Europe, Smart Mobility researchers will present their labeling tool for image data from cameras and Lidar laser scanners, powered by the NVIDIA DGX-1 AI supercomputer datacenter solution. With this tool, annotations can be pre-labeled, checked, and corrected in very little time: labeling experts need on average only 10% of the time normally required to generate high-quality training data.
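Here is a hedged sketch of the pre-label/review cycle described above: a detector proposes draft annotations, and a labeling expert only confirms or corrects them. The Annotation structure and function names are invented for illustration and do not reflect the internals of the FOKUS tool.

```python
# Sketch of a pre-label / human-review workflow; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Annotation:
    frame_id: str
    label: str                                # e.g. "pedestrian"
    bbox: tuple[float, float, float, float]   # x, y, w, h in pixels
    source: str = "model"                     # "model" for pre-labels, "human" after review
    verified: bool = False

def prelabel(frames, detector) -> list[Annotation]:
    """Run the detector once to produce draft annotations for every frame."""
    drafts = []
    for frame_id, image in frames:
        for label, bbox in detector(image):
            drafts.append(Annotation(frame_id, label, bbox))
    return drafts

def review(draft: Annotation, corrected_bbox=None, corrected_label=None) -> Annotation:
    """A labeling expert confirms a draft or edits only the incorrect fields."""
    return Annotation(
        frame_id=draft.frame_id,
        label=corrected_label if corrected_label is not None else draft.label,
        bbox=corrected_bbox if corrected_bbox is not None else draft.bbox,
        source="human",
        verified=True,
    )
```

The time saving the text cites comes from this asymmetry: the model does the bulk marking, and the expert touches only the fraction of drafts that are wrong.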

With the help of FOKUS' innovative AI algorithms, image data from cameras and laser scanners is divided into object classes such as roadway, vehicle, and pedestrian, and individual instances of these objects are separated from one another. In this way, even individual pedestrians in a crowd can be told apart. For each object, the algorithms also calculate its position and rotation in 3D space.
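One common way to recover an object's position and rotation from the Lidar points of a segmented instance is a principal-component fit, sketched below with NumPy. This illustrates the general technique only; the source does not describe the algorithm FOKUS actually uses.

```python
# Estimate the centre and yaw of one segmented object from its Lidar points
# via a principal-component fit; a textbook technique, shown as illustration.
import numpy as np

def estimate_pose(points: np.ndarray):
    """points: (N, 3) array of Lidar returns belonging to one object instance.

    Returns the object centre and its yaw (rotation about the vertical axis).
    """
    centre = points.mean(axis=0)
    # Fit the dominant horizontal direction of the point cloud (x, y only).
    xy = points[:, :2] - centre[:2]
    cov = np.cov(xy, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # longest horizontal axis
    yaw = float(np.arctan2(major[1], major[0]))
    return centre, yaw
```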

FOKUS’ mobility data backend manages the sensor data of hundreds of vehicles and enables labeling experts and developers worldwide to view and correct the data in a standard web browser. The result is an ever-larger pool of high-quality training data for the AI algorithms in the vehicle.
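As an illustration of a browser-facing annotation service, the sketch below exposes two Flask endpoints: one to fetch the current (pre-)labels for a frame and one to store an expert's corrections. The routes, payloads, and in-memory store are assumptions; the actual FOKUS backend API is not public.

```python
# Minimal sketch of a browser-facing annotation endpoint, assuming a Flask
# service in front of the data store; not the actual FOKUS backend API.
from flask import Flask, jsonify, request

app = Flask(__name__)
ANNOTATIONS = {}   # in-memory stand-in for the real sensor-data store

@app.route("/frames/<frame_id>/annotations", methods=["GET"])
def get_annotations(frame_id):
    """Labeling experts fetch the current (pre-)labels for a frame."""
    return jsonify(ANNOTATIONS.get(frame_id, []))

@app.route("/frames/<frame_id>/annotations", methods=["PUT"])
def put_annotations(frame_id):
    """A corrected set of labels replaces the drafts and is marked verified."""
    labels = request.get_json()
    for label in labels:
        label["verified"] = True
    ANNOTATIONS[frame_id] = labels
    return jsonify({"frame": frame_id, "count": len(labels)})
```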

Further information

Session at GTC on Thursday, October 11, 11:00–11:50, room 14c:

“Deep Learning software applications for Autonomous Vehicles” with Dr. Ilja Radusch, head of the business unit Smart Mobility