nearbAI on FPGA

AI IP core – Deep Learning Accelerator

easics has created a parameterizable AI IP core that can be deployed on Intel and Xilinx FPGAs; examples are Intel Arria 10, Cyclone 10 and Xilinx Zynq, Zynq UltraScale+. The deep learning model and the constraints of your application, such as performance, latency, power consumption and cost, define the right parameters of the nearbAI core as inference engine. The nearbAI software tools convert the model and its weights into an FPGA build file that is ready to deploy on the chosen FPGA hardware.

Why choose nearbAI as AI accelerator?

  • The nearbAI IP core is optimized for the best performance on the FPGA of your choice.
  • easics' software development kit offers a fast time-to-market.
  • The FPGA logic can be shaped to match any neural network architecture.
  • Our software tools offer a flexible approach to program the FPGA and map the neural networks.
  • High performance per watt and low latency make it suitable for real-time embedded applications.
  • Performance, cost and power requirements define the nearbAI IP configuration.
  • Future-proof and scalable: the FPGA architecture can be re-configured for future neural networks.
  • The deep learning core can be easily integrated within the top level of your application.

nearbAI software tools

easics’ deep learning framework supports neural networks built with existing frameworks such as TensorFlow, Caffe, Python, ONNX, C/C++, … The input for the framework is the network description and the weights of the trained deep learning model. The nearbAI compiler converts the network description into a runtime schedule (microcode) and quantizes the floating-point weights of the trained model to fixed point. The estimator GUI provides the right hardware configuration and performance estimates for the chosen FPGA, or for several candidate FPGAs. Once your nearbAI core is configured and the FPGA is chosen, the FPGA configuration file is generated.
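The float-to-fixed conversion step can be illustrated with a minimal sketch. The function below is hypothetical and stands in for the nearbAI compiler's quantizer; the actual algorithm, rounding mode and bit widths are tool-specific.

```python
import numpy as np

def quantize_weights(weights, num_bits=8):
    """Quantize floating-point weights to signed fixed point.

    Hypothetical sketch of the kind of float-to-fixed conversion the
    nearbAI compiler performs; the real tool's algorithm may differ.
    """
    max_abs = float(np.max(np.abs(weights)))
    # Pick the number of fractional bits so the largest magnitude still
    # fits in the signed num_bits range.
    int_bits = max(0, int(np.ceil(np.log2(max_abs)))) if max_abs > 0 else 0
    frac_bits = num_bits - 1 - int_bits
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    q = np.clip(np.round(weights * scale), lo, hi).astype(np.int8)
    return q, frac_bits

w = np.array([0.5, -0.25, 0.75])
q, frac_bits = quantize_weights(w)
w_hat = q / 2.0 ** frac_bits  # dequantize to inspect the rounding error
```

Dequantizing the result (`w_hat`) and comparing it against the original weights is a quick way to judge how much accuracy a given bit width costs.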

Deep learning on FPGA

The following diagram summarizes the automated hardware implementation flow. The hardware configuration file describes the number of multipliers, buffer sizes, interface widths and clock frequencies. The CNN model file contains the CNN network topology and the parameters (weights, biases, …). The hardware generation results in an FPGA configuration file (bitfile) based on primitive CNN operations and a scaled amount of available FPGA resources. The runtime schedule generation produces a compiled program of low-level commands that runs the AI acceleration.
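To make the configuration parameters above concrete, here is a hypothetical hardware configuration expressed as a Python dict; the parameter names and values are illustrative, not the actual nearbAI file format.

```python
# Hypothetical hardware configuration file contents, expressed as a
# Python dict for illustration; the real parameter names and file format
# are defined by the nearbAI tools.
hw_config = {
    "num_multipliers": 512,        # MAC units instantiated in the fabric
    "weight_buffer_kib": 256,      # on-chip weight buffer size
    "activation_buffer_kib": 512,  # on-chip activation buffer size
    "mem_if_width_bits": 128,      # external memory interface width
    "clock_mhz": 200,              # target clock frequency
}

# Peak multiply-accumulate throughput implied by this configuration,
# in giga-MACs per second (one MAC per multiplier per cycle).
peak_gmacs = hw_config["num_multipliers"] * hw_config["clock_mhz"] / 1e3
```

A back-of-the-envelope throughput number like this is what the estimator GUI helps refine per target FPGA, balancing multipliers and buffers against the available resources.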

You upload and store the microcode and the weights on the SDRAM connected to the FPGA. The classification result of the deep learning algorithm (what is detected and where) is sent to the application, where the detection result is acted upon. We can supply a complete system design around the deep learning core, including camera interfaces or other external interfaces. A standard solution combines our TCP Offload Engine with the nearbAI IP core.
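The host-side deployment described above can be sketched roughly as follows; the class, method names and result fields are all hypothetical, standing in for the actual nearbAI runtime API.

```python
# Hypothetical host-side deployment flow: the names below (NearbAIRuntime,
# load_to_sdram, infer) are illustrative, not the actual nearbAI API.
class NearbAIRuntime:
    def __init__(self):
        self.sdram = {}

    def load_to_sdram(self, microcode: bytes, weights: bytes) -> None:
        # In a real system these buffers would be DMA-transferred to the
        # SDRAM attached to the FPGA; here we just stage them in a dict.
        self.sdram["microcode"] = microcode
        self.sdram["weights"] = weights

    def infer(self, frame: bytes) -> list:
        # Placeholder result: a real run executes the compiled schedule
        # on the accelerator and returns what was detected and where.
        return [{"label": "object", "bbox": (0, 0, 64, 64), "score": 0.9}]

rt = NearbAIRuntime()
rt.load_to_sdram(microcode=b"\x00" * 16, weights=b"\x00" * 1024)
detections = rt.infer(frame=b"\x00" * (640 * 480))
```

The point of the sketch is the division of labor: the compiled schedule and weights live in SDRAM next to the FPGA, while the application only sees structured detection results.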

Deep learning on FPGA – download PDF documentation

Want to know more about nearbAI on FPGA?

Request a demo or evaluation kit!


Intel Arria 10 Evaluation System

  • Delivered with the neural network of your choice
  • FPGA IP evaluation core and CPU SDK
  • Ethernet, USB and HDMI
  • SFP+ for 10GigE
  • Supports Quartus design flow
  • Runs on ReflexCES Achilles instant development kit or PCIe Carrier Board Arria 10 SoC SoM Development Kit

Xilinx Zynq UltraScale+ MPSoC Evaluation System

  • Delivered with the neural network of your choice
  • FPGA IP evaluation core and CPU SDK
  • Ethernet, USB, HDMI and MIPI
  • SFP+ for 10GigE
  • Supports Vivado design flow
  • Runs on the Xilinx ZCU104 development kit

Which customers benefit from deep learning on FPGA?

The nearbAI accelerator offers benefits to machine builders, semiconductor companies and even manufacturing companies.

Machine builders and OEMs
Machine builders and OEMs benefit from nearbAI when it comes to outperforming classical vision algorithms and integrating AI in their systems for cameras, vehicles, robotics, inspection machines and more. We offer an embedded solution for deep learning on FPGA, preferably on a System-on-Module (SoM). Working with an FPGA instead of a GPU or CPU offers many advantages in terms of performance, size, power, latency and overall cost efficiency. It is also scalable to future FPGAs.
Semiconductor companies
For semiconductor companies and sensor manufacturers we provide an AI solution for smarter sensors with structured data output. nearbAI can outperform AI on an MCU for real-time decision making. Possible sensors include image, audio, lidar and many more.
Manufacturing companies
If your company is looking for a solution or application that uses AI at the edge, nearbAI is an excellent choice. We help you quickly verify your AI or vision concept, and we also build it, with sensor, AI hardware, firmware and embedded software.