nearbAI on FPGA
AI IP core – Deep Learning Accelerator
easics has created a parameterizable AI IP core that can be deployed on Intel and Xilinx FPGAs, for example Intel Arria 10 and Cyclone 10, and Xilinx Zynq and Zynq UltraScale+. The deep learning model and the constraints of your application, such as performance, latency, power consumption and cost, define the right parameters of the nearbAI core as inference engine. The nearbAI software tools convert the model and the weights into an FPGA build file that is ready to deploy on the chosen FPGA hardware.
Why choose nearbAI as an AI accelerator?
nearbAI software tools
easics’ deep learning framework supports different neural networks based on existing frameworks such as TensorFlow, Caffe, Python, ONNX, C/C++, and more. The input for the framework is the network description and the weights of the trained deep learning model. The nearbAI compiler converts the network description into a runtime schedule (microcode) and quantizes the trained model’s floating-point weights to fixed point. The estimator GUI provides the right hardware configuration and the expected performance for the chosen FPGA, or for several candidate FPGAs. Once the nearbAI core is configured and the FPGA is chosen, the FPGA configuration file is generated.
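The floating-point to fixed-point quantization step can be illustrated with a minimal sketch. This is a generic symmetric fixed-point scheme, not easics’ actual implementation; the bit widths are assumptions for illustration only:

```python
import numpy as np

def quantize_to_fixed_point(weights, frac_bits=12, total_bits=16):
    """Quantize floating-point weights to signed fixed-point integers.

    Each value is scaled by 2**frac_bits, rounded, and clipped to the
    representable range of a signed total_bits-wide integer.
    """
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    return np.clip(np.round(weights * scale), qmin, qmax).astype(np.int32)

def dequantize(q, frac_bits=12):
    """Recover approximate floating-point values from fixed-point integers."""
    return q.astype(np.float64) / (2 ** frac_bits)

# Example: quantize a few weights and inspect the rounding error.
w = np.array([0.5, -0.25, 0.001])
q = quantize_to_fixed_point(w)
error = np.abs(dequantize(q) - w)
```

In practice a compiler such as nearbAI’s would also choose the fraction width per layer based on the observed dynamic range of the weights, trading precision against overflow headroom.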
Deep learning on FPGA
The following diagram summarizes the automated hardware implementation flow. The hardware configuration file describes the number of multipliers, buffer sizes, interface widths and clock frequencies. The CNN model file contains the CNN network topology and the parameters (weights, biases, …). The hardware generation produces an FPGA configuration file (bitfile) based on primitive CNN operations and a scaled amount of available FPGA resources. The runtime schedule generation produces a compiled program of low-level commands that runs the AI acceleration.
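As an illustration, a hardware configuration file of this kind might look like the fragment below. The format and parameter names are hypothetical, chosen only to show the sort of values involved; the actual nearbAI file format is defined by easics’ tools:

```yaml
# Hypothetical hardware configuration (illustrative only)
multipliers: 256          # number of parallel MAC units
buffer_sizes:
  input_kb: 128           # on-chip input feature-map buffer
  weight_kb: 256          # on-chip weight buffer
interface_width_bits: 128 # SDRAM/AXI data-bus width
clock_mhz: 200            # target core clock frequency
```

Scaling these parameters up or down is how the same core is fitted to FPGAs of different sizes and to different performance, latency and power budgets.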
The microcode and the weights are uploaded to and stored in the SDRAM connected to the FPGA. The classification result of the deep learning algorithm (what is detected and where) is sent to the application that acts on the detection. We can supply a complete system design around the deep learning core, including camera interfaces or other external interfaces. A standard solution can combine our TCP Offload Engine with the nearbAI IP core.
Deep learning on FPGA – download PDF documentation
Want to know more about nearbAI on FPGA?
Which customers benefit from deep learning on FPGA?
The nearbAI accelerator offers benefits to semiconductor companies, machine builders and manufacturing companies alike.