Industrial AI Accelerator – AI FPGA SoM
easics provides the Industrial AI Accelerator, or nearbAI SoM, by combining the nearbAI IP core with an FPGA SoM (System on Module).
Together we will deploy your AI model inside your embedded application. Your model, together with the constraints of your application such as speed, latency, power consumption, and cost, determines the right FPGA SoM to use as the inference engine. The easics software tools convert the model and its weights into an FPGA build file that is ready to deploy on the chosen FPGA hardware.
Why choose nearbAI SoMs as an industrial AI engine?
Deep learning on nearbAI SoM
The performance of the AI accelerator is determined by the number of operations and memory accesses needed to run the entire neural network for each frame. An FPGA System on Module combines the FPGA with DDR memory banks. easics selects and prepares the right FPGA SoM based on the performance requirements of the application. You can also ask the easics team whether your selected FPGA (SoM) is up to the task of reaching your required performance.
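A rough back-of-envelope calculation illustrates the relationship between operations per frame and frame rate described above. This is a sketch with illustrative numbers only, not easics benchmark data; the `required_throughput_gops` helper and the ResNet-50 operation count are assumptions for the example.

```python
# Back-of-envelope sizing: the accelerator must sustain
# (operations per frame) x (target frames per second).

def required_throughput_gops(ops_per_frame_g, target_fps):
    """Giga-operations per second needed to sustain target_fps.

    ops_per_frame_g: operations per frame, in giga-operations (GOPs).
    """
    return ops_per_frame_g * target_fps

# ResNet-50 at 224x224 needs roughly 7.7 GOPs per frame
# (illustrative figure; memory-access cost is not modeled here).
gops = required_throughput_gops(7.7, 30)  # 30 FPS target
print(f"{gops:.0f} GOPS required")  # prints "231 GOPS required"
```

In practice, memory bandwidth to the DDR banks can be the limiting factor rather than raw compute, which is one reason the SoM selection looks at both.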
The FPGA SoM can be plugged into your own tailor-made carrier board or into an existing off-the-shelf carrier board. Via an API, the classification result of the deep learning algorithm (what it is and where it is) is sent to the application, which then acts on the detection. easics can supply a complete system design on the FPGA SoM, including the deep learning core, camera interfaces, and external interfaces. A standard solution can combine our TCP Offload Engine with the nearbAI IP core.
Some benchmarks are shown in the table below; please contact us with your requirements:
| Network model | Input image resolution | FPGA | FPS |
|---------------|------------------------|------|-----|
| ResNet-50 | 224x224 | Arria 10 GX 480 | 55.6 |
| YOLOv3 | 224x224 | Arria 10 GX 480 | 25.9 |
nearbAI software tools
easics’ deep learning software tools support different neural networks based on existing frameworks and formats such as TensorFlow, Caffe, Python, ONNX, C/C++, …
The network model or description can be uploaded to the estimator tool. Based on a preset of hardware parameters, it evaluates the performance of the AI accelerator and gives an overview of the resources used on different FPGAs. This overview can be used to adjust the hardware parameters until the target performance is reached, or to select the right FPGA SoM. The estimator tool is a free online tool provided by easics.
The next step is to deploy the AI accelerator on the selected FPGA SoM. The weights of the trained model are quantized from floating point into a fixed-point image. A bitstream is generated and deployed on the FPGA; the binary (microcode) and the quantized weights are stored in the DDR memory.
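The float-to-fixed-point conversion of the weights can be sketched with a minimal symmetric quantization scheme. This is an illustration of the general technique only; the actual easics tool flow, bit widths, and quantization scheme may differ, and the function names here are made up for the example.

```python
# Minimal sketch of symmetric fixed-point weight quantization:
# map each float weight to a signed integer plus a shared scale factor.

def quantize_weights(weights, num_bits=8):
    """Convert float weights to signed fixed-point integers and a scale."""
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8-bit signed
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / qmax if max_abs else 1.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the fixed-point image."""
    return [q * scale for q in quantized]

w = [0.5, -1.27, 0.031, 0.9]
q, s = quantize_weights(w)
print(q)  # integers in [-127, 127]; the largest-magnitude weight maps to -127
```

The quantized integers are what would be written into the weight image in DDR memory; the scale travels with them so the hardware (or a post-processing step) can interpret the fixed-point values correctly.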
Deep learning on FPGA – download PDF documentation
Want to know more about our deep learning framework on FPGA?