Affordable AI in the box

Video: “Get your AI out of the cloud with easics”, from imec on Vimeo.

Today, artificial intelligence often runs in the cloud, with data shuttled back and forth between the application and AI algorithms running on energy-guzzling cloud processors. At easics, we use our expertise in system-on-chip design to develop small, low-power and affordable AI that runs locally, e.g. on your camera system or inside a robot or machine. The result is more secure, faster, and has low, predictable latency. We prove it in our demo, where you can challenge our hardware demonstrator at real-time object recognition!

Getting AI out of the cloud

Artificial intelligence is becoming the preferred solution for making many applications and production facilities smarter. With machine learning – the most successful form of AI – applications can learn from actual data, mostly sensor readings. This way, engineers no longer have to program intelligence explicitly, and applications can be made smart in a much faster, cheaper, and more flexible way.

Today’s smart factories crave self-learning engines that make fast in-line decisions, close to the applications and sensors. Think of in-line quality control, factory automation, flexible robotics, automated sorting, … Such on-premise AI engines need to be low-latency, energy-efficient, small, and cost-effective. That combination of requirements is hard to achieve with GPU-based cloud computing. What is needed instead is highly customized yet affordable hardware with long-term availability.

Generating an AI processor for the application

To create such innovative hardware, easics is developing a software framework that automatically generates hardware descriptions of the deep neural networks needed to make a specific application smart. This optimized hardware description can then be realized as a custom FPGA or ASIC solution.
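To make that concrete, the sketch below shows the kind of sizing arithmetic such a generator has to perform: counting the multiply-accumulate (MAC) operations of a network and deriving how many MAC units must run in parallel to sustain a target frame rate at a given clock. It is a minimal illustration with made-up layer shapes and numbers, not easics' actual framework.

```python
# Hypothetical sizing sketch for an FPGA inference engine.
# Layer shapes, clock and frame rate are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ConvLayer:
    out_h: int   # output feature-map height
    out_w: int   # output feature-map width
    in_ch: int   # input channels
    out_ch: int  # output channels
    k: int       # square kernel size

    def macs(self) -> int:
        # Multiply-accumulate operations for one inference pass.
        return self.out_h * self.out_w * self.in_ch * self.out_ch * self.k * self.k

def required_parallelism(layers, fps, clock_hz):
    """MAC units that must operate in parallel to sustain `fps` at `clock_hz`."""
    total_macs = sum(layer.macs() for layer in layers)
    cycles_per_frame = clock_hz / fps
    return total_macs / cycles_per_frame

# A toy three-layer CNN on a 224x224 input.
layers = [
    ConvLayer(112, 112, 3, 32, 3),
    ConvLayer(56, 56, 32, 64, 3),
    ConvLayer(28, 28, 64, 128, 3),
]
print(f"~{required_parallelism(layers, fps=30, clock_hz=200e6):.0f} parallel MAC units needed")
```

In a real generator, the resulting parallelism would be weighed against the DSP and logic budget of the target FPGA (or the silicon budget of an ASIC) to pick a feasible architecture.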

Such a solution is compact, consumes less power than the alternatives, and offers low, fixed latency and a fast inference rate. In addition, it is future-proof, allowing you to scale up the performance of your application as new generations of FPGA components become available.

The easics solution is generated from application-specific parameters such as the resolution of the input images, the frame rate, the type of neural network (e.g. ResNet, Mask R-CNN, YOLO, MobileNet, or a custom net), the required latency, the power budget, and the target hardware cost.
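As an illustration, such a parameter set could look like the snippet below. The field names and values are hypothetical, chosen to mirror the list above; they are not easics' real interface.

```python
# Illustrative generator input; all names and values are assumptions.
generator_config = {
    "input_resolution": (1920, 1080),  # pixels (width, height)
    "frame_rate": 30,                  # frames per second
    "network": "MobileNet",            # or ResNet, Mask R-CNN, YOLO, custom
    "max_latency_ms": 10,              # worst-case end-to-end latency
    "power_budget_w": 5.0,             # watts available for the engine
    "target": "FPGA",                  # FPGA or ASIC
    "max_unit_cost_eur": 150.0,        # target hardware cost per unit
}
```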

Application domains that will benefit from such embedded AI engines include:
– Industry 4.0: in-line quality control, factory automation, robotics, predictive maintenance
– smart city & surveillance: crowd & traffic monitoring
– smart mobility: self-driving cars
– smart health: medical image analysis, low-power wearables

The technology showcased in this demo is an embedded deep learning inference engine on FPGA that uses CNNs (convolutional neural networks) for real-time object recognition, localization, and tracking in images or live video. The application is “programmed” by first feeding it labeled data, after which it can recognize objects on its own.
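The same train-once-then-recognize workflow can be tried on ordinary hardware with an off-the-shelf detector. The sketch below uses a pretrained torchvision model as a stand-in for the trained network; it is not the easics engine, only an illustration of the inference step on a single frame.

```python
# Minimal object-recognition sketch using a pretrained torchvision detector.
# This stands in for a trained network; it does not run on the easics FPGA.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained on COCO
model.eval()

# One 3-channel frame with values in [0, 1]; in the demo this is live video.
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:  # keep only confident detections
        print(f"class {int(label)} at {[round(v) for v in box.tolist()]} (score {score:.2f})")
```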

Easics, a dependable partner

Originally a spin-off of imec and KU Leuven – ESAT, easics has 27 years of experience and an impeccable track record designing first-time-right ASICs and FPGAs. Easics’ solutions are at the heart of many applications, including mobile communication devices, intelligent cameras, infrared image sensors, food sorting machines, cochlear hearing aids and earth-observation satellites.

Easics and imec have always collaborated closely, with easics partnering in many innovative designs. Lately, easics has taken part in a number of imec.icon projects, collaborative efforts between academia and industry. The AI solution demonstrated here was partly developed in the imec.icon project HELP Video!, a project on scalable embedded video processing. Two more imec.icon projects, cREAtIve and SenseCity, are currently running. Research and development of the automated framework that generates optimal AI engines for both FPGA and ASIC platforms continues in these projects.

Taking its AI platform as a starting point, easics plans tight integration with a range of sensors: image sensors capturing light inside and outside the visible spectrum (e.g. hyperspectral and thermal infrared), radar, LIDAR, time-of-flight, ultrasound, microphones, …

Challenge our real-time object recognition

The easics deep learning demo is set up to recognize and label a large number of objects in a real-time video feed of the audience and its surroundings. At the heart of the demo sits an easics FPGA board running a trained deep neural network. Objects recognized in the live video include people, laptops, backpacks, cell phones, plants, fruit, various animals, silverware, … The audience is of course welcome to challenge the demo.

Contact

Bram Senave – business development manager – bram@easics.be  

Ramses Valvekens – managing director – ramses@easics.be

More information: DSP Valley blog