Industrial AI accelerator

Industrial AI Accelerator – AI FPGA SoM

easics provides the Industrial AI accelerator, or nearbAI SoM, by combining the nearbAI IP core with an FPGA SoM (System on Module).

Together we deploy your AI model inside your embedded application. Your model and the constraints of your application, such as speed, latency, power consumption and cost, define the right FPGA SoM as inference engine. The easics software tools convert the model and its weights into an FPGA build file that is ready to deploy on the chosen FPGA hardware.

Why choose nearbAI SoMs as an industrial AI engine?

FPGAs have product lifecycles of 15 years.
Industrial temperature range: -45 °C to +85 °C.
High performance per watt and low latency make it suitable for real-time embedded applications.
Low memory footprint thanks to fixed-point (16-bit, 12-bit, 8-bit, 6-bit) data types; see the sketch after this list.
The FPGA logic can be shaped to match any neural network architecture.
Performance, cost and power define the FPGA of choice.
Future-proof and scalable: the FPGA architecture can be re-configured for future neural networks.
The easics framework offers a flexible way to program the FPGA and a fast time to market.
The deep learning core integrates easily with CPUs, vision functionality and connectivity.
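
As a quick illustration of the memory-footprint point above, the sketch below compares weight storage at the listed fixed-point widths with 32-bit floating point. The parameter count is an illustrative assumption (roughly the size of MobileNet V2), not an easics figure.

    # Illustrative only: weight storage at the fixed-point widths listed
    # above, compared with 32-bit floating point.
    PARAMS = 3_500_000  # assumed parameter count, ~MobileNet V2

    def weight_bytes(params: int, bits: int) -> int:
        """Storage needed for `params` weights at `bits` bits each (rounded up)."""
        return (params * bits + 7) // 8

    print(f"float32: {weight_bytes(PARAMS, 32) / 1e6:.2f} MB")
    for bits in (16, 12, 8, 6):
        print(f"{bits:>2}-bit fixed point: {weight_bytes(PARAMS, bits) / 1e6:.2f} MB")

For this assumed model, 8-bit fixed point cuts weight storage from 14 MB to 3.5 MB, which is what makes fitting the weights in on-module DDR (or even on-chip RAM) practical.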

Deep learning on nearbAI SoM

FPGA SoM accelerator

The performance of the AI accelerator is defined by the number of operations and memory accesses needed to run the entire neural network for one frame. An FPGA System on Module combines the FPGA with DDR memory banks. easics selects and prepares the right FPGA SoM depending on the performance requirements of the application. You can also ask the easics team whether your selected FPGA (SoM) is up to the task of reaching your required performance.
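
To make that reasoning concrete, here is a minimal back-of-the-envelope sketch: the achievable frame rate is bounded by compute (operations per frame) on one side and by DDR bandwidth (bytes moved per frame) on the other. All numbers below are illustrative assumptions, not measured nearbAI figures.

    # Roofline-style frame-rate estimate; every constant is an assumption.
    OPS_PER_FRAME = 7.8e9       # ~2 x 3.9 GFLOPs, roughly ResNet-50 at 224x224
    BYTES_PER_FRAME = 120e6     # assumed weight + feature-map DDR traffic per frame
    ACCEL_OPS_PER_S = 0.5e12    # assumed sustained accelerator throughput
    DDR_BYTES_PER_S = 8e9       # assumed usable DDR bandwidth on the SoM

    compute_bound_fps = ACCEL_OPS_PER_S / OPS_PER_FRAME
    memory_bound_fps = DDR_BYTES_PER_S / BYTES_PER_FRAME

    # The achievable frame rate is limited by the slower of the two.
    print(f"compute-bound: {compute_bound_fps:.1f} FPS")
    print(f"memory-bound:  {memory_bound_fps:.1f} FPS")
    print(f"estimate:      {min(compute_bound_fps, memory_bound_fps):.1f} FPS")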

The FPGA SoM can be plugged into your own tailor-made carrier board or into an existing off-the-shelf carrier board. Via an API, the classification result of the deep learning algorithm (what is detected and where) is sent to the application that acts on it. easics can supply a complete system design on the FPGA SoM, including the deep learning core, camera interfaces and external interfaces. A standard solution can combine our TCP Offload Engine with the nearbAI IP core.
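
For illustration, the "what & where" result the application receives could look like the sketch below. The Detection fields and the handle_frame helper are hypothetical, chosen for this example; they are not the actual easics API.

    # Hypothetical application-side view of the per-frame detection results.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Detection:
        label: str         # what: class name, e.g. "person"
        confidence: float  # classification score in [0, 1]
        x: int             # where: bounding box origin, pixel coordinates
        y: int
        width: int
        height: int

    def handle_frame(detections: List[Detection]) -> None:
        """Application logic acting on the accelerator's results for one frame."""
        for d in detections:
            if d.confidence > 0.5:
                print(f"{d.label} at ({d.x}, {d.y}) size {d.width}x{d.height}")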

Some benchmarks are shown in the table below, but please contact us with your specific requirements:

Network model    Input image resolution    FPGA               FPS
ResNet-50        224x224                   Arria 10 GX 480    55.6
MobileNet V2     224x224                   ZU2CG/EG           59.7
YOLOv3           416x416                   ZU5CG/EG           9
YOLOv3           224x224                   Arria 10 GX 480    25.9

nearbAI software tools

easics’ deep learning software tools support neural networks from existing frameworks such as TensorFlow, Caffe, Python, ONNX, C/C++, …
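
Since ONNX is a natural interchange point between these frameworks, a model can, for example, be exported from TensorFlow/Keras to an ONNX file first. The sketch below is a generic export recipe using the third-party tf2onnx package, not easics' own flow:

    # Export a Keras model (here MobileNet V2, matching the benchmark table)
    # to ONNX via tf2onnx.
    import tensorflow as tf
    import tf2onnx

    model = tf.keras.applications.MobileNetV2(weights="imagenet")

    spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)
    tf2onnx.convert.from_keras(model, input_signature=spec,
                               output_path="mobilenet_v2.onnx")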

The network model or description can be uploaded into the estimator tool. Based on a preset of hardware parameters, it evaluates the performance of the AI accelerator. It also gives an overview of the resources used on different FPGAs. This overview can be used to tune the hardware parameters until the required performance is reached, or to select the right FPGA SoM. The estimator tool is a free online tool provided by easics.
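
The kind of trade-off such an estimator exposes can be sketched as follows: more parallel multiply-accumulate units raise the frame rate but consume more FPGA resources. The DSP budgets, clock rate and workload below are illustrative assumptions, not estimator output.

    # Illustrative resource-vs-performance sweep over candidate FPGAs.
    OPS_PER_FRAME = 7.8e9   # assumed workload (see the estimate above)
    CLOCK_HZ = 200e6        # assumed accelerator clock
    DSP_PER_MAC = 1         # assumed: one DSP block per multiply-accumulate unit

    FPGA_DSP_BUDGET = {     # illustrative DSP counts for candidate devices
        "ZU2CG/EG": 240,
        "ZU5CG/EG": 1248,
        "Arria 10 GX 480": 1368,
    }

    for fpga, dsps in FPGA_DSP_BUDGET.items():
        macs = dsps // DSP_PER_MAC
        fps = macs * 2 * CLOCK_HZ / OPS_PER_FRAME  # 1 MAC = 2 ops per cycle
        print(f"{fpga:>16}: {macs} parallel MACs -> ~{fps:.1f} FPS (compute-bound)")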

The next step is to deploy the AI accelerator on the selected FPGA SoM. The weights of the trained model are quantized from floating point into a fixed-point image. A bitstream is generated and deployed on the FPGA. The binary or microcode and the quantized weights are stored in the DDR memory.
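
A minimal sketch of what such a conversion step can look like, assuming simple symmetric per-tensor quantization (the actual conversion in the easics tools may differ):

    # Symmetric fixed-point quantization of trained weights, per-tensor scale.
    import numpy as np

    def quantize(weights: np.ndarray, bits: int):
        """Map float weights to signed fixed-point integers with `bits` bits."""
        qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8-bit
        scale = np.abs(weights).max() / qmax       # per-tensor scale factor
        q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
        return q, scale

    # Usage: quantize random "weights" to 8-bit and check the rounding error.
    w = np.random.randn(1000).astype(np.float32)
    q, scale = quantize(w, bits=8)
    error = np.abs(w - q * scale).max()
    print(f"scale={scale:.6f}, max reconstruction error={error:.6f}")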

[Figure: software tool flow for the FPGA SoM]

Deep learning on FPGA – download PDF documentation

Want to know more about our deep learning framework on FPGA?

Request a demo or evaluation kit!

embedded AI engine demonstrator