Contact Person
Dr.-Ing. Ilja Radusch
Director
Business Unit ASCT
+49 30 3463-7474

FLLT.AI

AI-assisted annotation of 3D point clouds

Artificial intelligence (AI) can only operate as efficiently as the quality of the data with which it was trained allows. This applies in particular to deep learning, an efficient method of supervised machine learning that uses neural networks inspired by the human brain. Detailed and precise labeling of the data recorded by cameras and sensors is a prerequisite for this.

Using the labeled data, a vehicle learns to perceive its surroundings as they really are. The larger the data pool, the better the system can learn: it continuously optimizes itself, increasing its recognition accuracy and usefulness.
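
The principle behind this is supervised learning: the network makes a prediction, the prediction is compared against the human annotation, and the weights are adjusted to reduce the error. The following is a minimal, hypothetical sketch of such a training step in PyTorch; the toy network, class count, and batch are illustrative only and not part of FLLT.AI.

```python
import torch
from torch import nn

# Hypothetical toy classifier: maps a 64-dimensional feature vector to one of
# three object classes (e.g. "car", "pedestrian", "cyclist"). Illustrative only.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(features: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised update: predict, compare with the human labels, adjust."""
    optimizer.zero_grad()
    predictions = model(features)          # network output
    loss = loss_fn(predictions, labels)    # error against the annotations
    loss.backward()                        # compute gradients
    optimizer.step()                       # update the weights
    return loss.item()

# Dummy batch of 8 labeled samples; in practice these would come from the
# annotated sensor data described above.
batch_loss = training_step(torch.randn(8, 64), torch.randint(0, 3, (8,)))
```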

Accumulated as data sets, the annotated images are regarded in the automotive industry as the capital needed to bring autonomously driving vehicles onto the road. Usually, these sets are built from the images delivered by cameras mounted on the vehicles. Although cameras are already used in assistance systems, their object detection and recognition is susceptible to interference from changing weather conditions.

Vehicles therefore require additional sensor components for a detailed recording of the (learning) environment. The recorded camera data is linked with data from laser scanners, since these capture objects more precisely than calculations based on camera images alone allow.
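
One common way to link the two sensor streams, shown here as a sketch rather than the exact method used in FLLT.AI, is to project the 3D laser points into the camera image using the camera calibration, so that each point can be associated with a pixel and thus with the image-based annotations. The sketch assumes a pinhole camera model and known intrinsic and extrinsic calibration matrices; all names are illustrative.

```python
import numpy as np

def project_points(points_lidar: np.ndarray,
                   T_cam_lidar: np.ndarray,
                   K: np.ndarray) -> np.ndarray:
    """Project Nx3 lidar points into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform (lidar frame -> camera frame).
    K:           3x3 camera intrinsic matrix.
    Returns an Mx2 array of (u, v) pixel coordinates for the points
    lying in front of the camera.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates, then transform into the camera frame.
    points_h = np.hstack([points_lidar, np.ones((n, 1))])
    points_cam = (T_cam_lidar @ points_h.T)[:3, :]        # 3xN
    # Keep only points in front of the camera.
    in_front = points_cam[2, :] > 0.1
    points_cam = points_cam[:, in_front]
    # Perspective projection with the intrinsics.
    pixels_h = K @ points_cam
    pixels = (pixels_h[:2, :] / pixels_h[2, :]).T         # Mx2
    return pixels
```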

The FLLT.AI labeling tool presented here makes “training” the AI easy and convenient. While tools of this kind are largely established on the market for labeling camera images, corresponding tools for labeling laser scanner data, which is represented as point clouds, have not been available so far.

For three years now, researchers from the FOKUS Smart Mobility business unit have been working on such a tool, which closes this market gap in the annotation of laser scanner data. It links the data from cameras and laser scanners. The result: an image in which the individual objects can be precisely separated from each other.

Three-stage labeling for image data from cameras and lidar laser scanners

The tool offers three labeling modes: semantic, instance, and box labeling. Semantic labeling describes the semantic properties within the image motif and the point cloud, while instance and box labeling add further information about individual, identifiable objects, such as the separation between individual objects and their poses and dimensions in 3D space. The interface lets the user start with semantic labeling and then specify individual objects, which are then enclosed in individual frames, so-called “bounding boxes”. The user interface adapts to each step in the labeling process and thus supports the user in their work.
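
To make the three modes concrete, the annotations for a single frame could be represented roughly as follows. The field names and types are illustrative assumptions, not the actual FLLT.AI data format.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class BoundingBox3D:
    """Pose and dimensions of one object in 3D space (box labeling)."""
    center: tuple          # (x, y, z) in metres
    size: tuple            # (length, width, height) in metres
    yaw: float             # heading angle in radians
    class_name: str        # e.g. "car"

@dataclass
class LabeledFrame:
    """Annotations for one camera image and its associated point cloud."""
    points: np.ndarray                     # Nx3 lidar points
    semantic_labels: np.ndarray            # N class ids (semantic labeling)
    instance_ids: np.ndarray               # N object ids (instance labeling)
    boxes: list = field(default_factory=list)  # list of BoundingBox3D (box labeling)
```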

Automatic Pre-Labeling

Current AI methods are already capable of delivering high recognition quality. In FLLT.AI, these state-of-the-art networks are used for automated pre-labeling. The labels generated in this way present the human expert with a frame (camera image and associated point cloud) that has already been pre-processed to a high standard, so that the expert can concentrate on the improvements still needed compared to current AI systems.
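
A pre-labeling step of this kind can be sketched as follows. Here `pretrained_detector` stands in for whatever state-of-the-art network is used, and the confidence threshold is a hypothetical parameter; neither is taken from FLLT.AI itself.

```python
def prelabel_frame(frame, pretrained_detector, confidence_threshold=0.8):
    """Generate pre-labels for a frame and mark uncertain ones for review.

    `frame` is the camera image plus associated point cloud;
    `pretrained_detector` is any model returning (label, score) proposals.
    """
    proposals = pretrained_detector(frame)
    accepted, needs_review = [], []
    for label, score in proposals:
        if score >= confidence_threshold:
            accepted.append(label)        # shown to the annotator as a pre-label
        else:
            needs_review.append(label)    # the human expert corrects these first
    return accepted, needs_review
```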

Advantages at a glance

  • The web-based tool offers easy and fast access to the individual labeling modes, so that the data can be viewed and adjusted in the user’s usual browser.
  • A clear interface and intuitive controls are designed for highly efficient and time-saving labeling.
  • When working with the tool, the user can switch between a 2D and a 3D perspective.
  • The processed information is managed on a server at FOKUS and can also be stored on a corresponding customer server.
  • The backend of the application can also be managed independently by the customer.