LICT Workshop on Deep Learning
On the 8th of June, five of us went to the LICT workshop on Deep Learning, organized by the Leuven Center on Information and Communication Technology. Easics as a whole is currently building a healthy expertise around FPGA implementations of neural networks to follow the trends in the market, and this workshop fits perfectly into that idea. Geert and I are currently working on an ICON project on the design methodology of multimedia applications; our demonstrator for this project focuses on a generic implementation of a neural net on FPGAs. Anthony, Bert P. and Ilse joined us so that the built-up know-how is spread more widely through the company for when we focus more on projects linked to deep learning.
Foundation and application domains
The first half of the workshop featured five talks and one keynote. It mainly focused on giving the audience a sound, yet concise, foundation in the theory of deep learning in general and neural networks specifically. After the theoretical introduction, the focus shifted to application possibilities for deep learning across multiple domains.
The theoretical basis was very helpful because it put everyone on an even footing. As mentioned before, Geert and I have been working inside the domain of neural networks and as such have been reading a lot of literature. This works well, even though neither of us has a formal background in the subject. Deep learning and neural networks, however, are such a hype in the tech industry that most of the available information is hopelessly disorganized. Having it explained in a lecture aimed at non-experts was still helpful to solidify our understanding of the basic concepts. For people who had little previous experience with these ideas, the lecture gave just enough information to follow the rest of the talks comfortably.
The application talks were spearheaded by the keynote by Jonathan Berte, CEO of Robovision. The other talks gave more academic insights into the application domains. What I found very interesting is that the academics were not just studying deep learning algorithms themselves. The groups of, for example, prof. Polin and prof. Moens use neural networks as an actual tool for otherwise loosely related research. This part of the workshop solidified my impression that while deep learning is a hype and a buzzword, its concepts and consequences are disruptive enough to stay.
Practicalities and challenges
After a short break and some time to socialize, the second half of the talks started: four more talks by academics and a closing keynote by Laurent Sorber of Forespell. This time, the focus was on lower-level practicalities and the problems one may face when using or implementing neural networks. These considerations were mainly questions like "Which environment should I run my deep learning application on?", "What is the impact of certain decisions on the power consumption of my network?", and "How can I transform non-obvious problems into ones suitable for deep learning?". As such, these talks zoomed in on specific use cases rather than taking a broad look.
While none of the subjects covered implementations of neural networks on FPGAs, following these talks will surely help us reason about system-level decisions for deep learning applications. This will help us conquer the market with our highly efficient FPGA and ASIC architectures!