Point cloud object detection demonstrator

Overview

Can easics create a point cloud object detection demonstrator on FPGA?
Can it run with an inference time below 60 ms?
Can it combine both image and LIDAR data?

The customer offers true solid-state LIDAR to OEMs in the automotive and industrial markets.

Requirements

Our solution

Images are acquired using a LIDAR. The LIDAR produces two types of images: a visual image based on ambient light and a signal image based on the transmitted laser pulse.
The signal image is sent to the FPGA, which extracts bounding boxes for a number of object classes (people, cars, bicycles, …) using a custom AI model developed by the customer. The customer's processing unit then adds distance information to the bounding boxes, and the result is displayed as an overlay on the visual image.
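The source does not describe the interface between the FPGA detector and the customer's processing unit. Purely as an illustration of the data flow, the Python sketch below shows how distance information could be attached to the bounding boxes; the Detection fields, the per-pixel range image, and the median-range heuristic are all assumptions, not the customer's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

import numpy as np


@dataclass
class Detection:
    """One bounding box as produced by the detector, in pixel coordinates."""
    label: str
    x_min: int
    y_min: int
    x_max: int
    y_max: int
    distance_m: Optional[float] = None  # filled in by the post-processing step


def attach_distances(detections: List[Detection], range_image: np.ndarray) -> List[Detection]:
    """Annotate each box with the median LIDAR range (in metres) inside it."""
    for det in detections:
        patch = range_image[det.y_min:det.y_max, det.x_min:det.x_max]
        valid = patch[patch > 0.0]  # treat 0.0 as "no return"
        det.distance_m = float(np.median(valid)) if valid.size else None
    return detections


if __name__ == "__main__":
    # Synthetic range image and a single detection, for illustration only.
    rng = np.random.default_rng(0)
    range_image = rng.uniform(5.0, 50.0, size=(480, 640)).astype(np.float32)
    boxes = [Detection("car", 100, 200, 180, 260)]
    for det in attach_distances(boxes, range_image):
        print(f"{det.label}: ~{det.distance_m:.1f} m")
```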
Performance benchmark on FPGA
The inference times were first estimated with the estimator tool and then compared to the actual inference times measured on the Achilles DevKit.
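How such a comparison might be scripted is sketched below in Python. The run_inference callable, the number of runs, and the 55 ms estimate are placeholders: neither the estimator tool's interface nor the DevKit driver API is shown in the source.

```python
import statistics
import time


def benchmark(run_inference, sample, estimated_ms, runs=50):
    """Measure wall-clock inference latency and report it next to the estimate."""
    latencies_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        run_inference(sample)  # hypothetical wrapper around the DevKit driver
        latencies_ms.append((time.perf_counter() - start) * 1e3)
    measured_ms = statistics.median(latencies_ms)
    print(f"estimated: {estimated_ms:.1f} ms   measured (median of {runs} runs): {measured_ms:.1f} ms")
    return measured_ms


if __name__ == "__main__":
    # Stand-in workload so the sketch runs without hardware attached.
    benchmark(lambda s: sum(s), list(range(100_000)), estimated_ms=55.0)
```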
Correctness of the FPGA implementation outputs
The implementation has been compared to the reference model using the histogram technique. Additionally, a number of image sequences were processed and the resulting box locations were visually compared to those of the reference model. The only differences observed were small offsets in the box locations.
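The source does not detail the histogram technique beyond its name. The sketch below shows one plausible reading, assuming it means comparing value histograms of the reference-model and FPGA outputs over a shared bin range, so that quantisation offsets show up as mass shifting between neighbouring bins.

```python
import numpy as np


def histogram_mismatch(reference: np.ndarray, fpga: np.ndarray, bins: int = 64) -> float:
    """Return the normalised sum of absolute bin-count differences between two tensors."""
    lo = min(reference.min(), fpga.min())
    hi = max(reference.max(), fpga.max())
    ref_hist, _ = np.histogram(reference, bins=bins, range=(lo, hi))
    fpga_hist, _ = np.histogram(fpga, bins=bins, range=(lo, hi))
    return float(np.abs(ref_hist - fpga_hist).sum() / max(ref_hist.sum(), 1))


if __name__ == "__main__":
    # Synthetic data: the "FPGA" output is the reference plus a small offset.
    rng = np.random.default_rng(1)
    ref = rng.normal(size=100_000).astype(np.float32)
    hw = ref + rng.normal(scale=0.01, size=ref.shape).astype(np.float32)
    print(f"histogram mismatch: {histogram_mismatch(ref, hw):.3f}")
```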

Results

The point cloud object detection demonstrator has been implemented on the Achilles DevKit, and the inference time is below 60 ms. The input images are received over Ethernet, and the results (the list of detected bounding boxes) are sent back over the same connection. The implementation has been debugged using the histogram technique. Additionally, the detections were visually compared with the results of the reference model on real-life image sequences.
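The wire format used over Ethernet is not specified in the source. Assuming a simple length-prefixed frame with a JSON reply, a host-side client could look like the following sketch; the host, port, and encoding are placeholders, not the demonstrator's actual protocol.

```python
import json
import socket
import struct


def detect_over_ethernet(host: str, port: int, frame_bytes: bytes, timeout_s: float = 1.0):
    """Send one length-prefixed frame and read back a JSON-encoded list of bounding boxes."""
    with socket.create_connection((host, port), timeout=timeout_s) as sock:
        sock.sendall(struct.pack(">I", len(frame_bytes)) + frame_bytes)
        (reply_len,) = struct.unpack(">I", _read_exactly(sock, 4))
        return json.loads(_read_exactly(sock, reply_len))


def _read_exactly(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes or raise if the connection closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed before full reply")
        buf += chunk
    return buf


# Example (placeholder address; replace with the DevKit's actual IP and port):
# boxes = detect_over_ethernet("192.0.2.10", 5000, frame_bytes)
```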