Aldec Design and Verification
Farhad Fallahlalehzari works as an Application Engineer at Aldec focusing on Embedded System Design. He provides technical support to customers developing embedded systems. His background is in Electrical and Computer Engineering, concentrating on Embedded Systems and Digital Systems Design. He received his Master of Science in Electrical Engineering from the University of Nevada, Las Vegas in 2016, and completed his Bachelor of Science in Electrical Engineering at Azad University, Karaj Branch, Iran in 2013.
How to develop an FPGA-based Embedded Vision application for ADAS, series of blogs – Part 1
February 28th, 2018 by Farhad Fallahlalehzari
Is it time to start using the term “Vision for Everything”, given how many industries vision-based applications are entering? It has been only a few years since the emergence of Embedded Vision, yet it is already used in a wide range of applications including security, medical, smart homes, robotics, transportation, Advanced Driver Assistance Systems (ADAS) and Augmented Reality (AR).
This is the first in a series of blogs explaining what you need to know to start designing Embedded Vision applications which can be used in ADAS, from choosing the right device and tools to demystifying the vision algorithms used in automotive applications and how to implement them into FPGAs.
ADAS consists of two main parts: vision and sensor fusion. Cameras in a smart car can provide information such as object detection, classification and tracking. However, they don’t provide the distance between the vehicle and obstacles, which is needed to prevent a collision. For that, sensors such as LIDAR or RADAR come into play.
In this series of blogs, we will focus mainly on the vision side of ADAS; sensor fusion will be covered in the future. The main goal of this series is to provide in-depth knowledge of Aldec’s complete ADAS reference design, which includes a 360-Degree Surround View, Driver Drowsiness Detection and a Smart Rear View.
Device and tool selection
In this section, we examine the devices commonly used for Embedded Vision and identify the most suitable one, along with the right tools and development board, for designing an ADAS solution.
For Embedded Vision applications, CPUs, GPUs, FPGAs, DSPs, ASICs and microcontrollers can all be used. However, there is an ongoing battle between FPGAs and GPUs because of their high-performance parallel processing capabilities, and it has always come down to the tradeoff between power consumption and performance.
Due to the rapid evolution of the hardware, software and algorithms used in Embedded Vision, re-configurability plays an important role, and this is exactly what FPGAs offer. These devices are not only superior to ASICs, offering a low-cost and fast acceleration solution thanks to millions of programmable gates and hundreds of I/O pins, but are also better suited than CPUs, which must time-slice or multi-thread tasks as they compete for compute resources: an FPGA can accelerate multiple portions of a computer vision pipeline simultaneously.
In a nutshell, the proliferation of vision applications demands high-performance, low-power and reprogrammable processing systems like FPGAs. That said, we shouldn’t disregard the ease of programming that CPUs and GPUs offer; SoC devices, however, can give us a combination of FPGAs and CPUs in a single chip.
I want to introduce you to the Xilinx All Programmable Zynq™-7000 and Zynq UltraScale+™ MPSoC, both of which combine programmable logic (HW) with ARM processors (SW). I have written a dedicated blog about this architecture, which you can find here. Because of its unique features, the Zynq is an efficient solution for an Embedded Vision project, and particularly for ADAS, since accelerating vision algorithms in the HW side of the Zynq makes a huge difference in overall agility and power consumption. The Xilinx SDSoC tool enables the user to partition vision algorithms between SW and HW automatically. Another tool which eases the way is Vivado HLS, a high-level synthesis tool that converts C/C++ code into HDL. This makes life easier for software engineers using Zynq devices.
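To give a flavor of what this looks like in practice, here is a minimal sketch of the kind of C/C++ function Vivado HLS can synthesize into HDL: a simple grayscale threshold stage such as might appear early in a vision pipeline. The function name, frame size and pragma placement are illustrative choices, not taken from Aldec’s reference design; the pragmas are standard HLS directives and are simply ignored by an ordinary C++ compiler, so the same code can be tested on a workstation before synthesis.

```cpp
#include <cstdint>

// Small frame for illustration; a real ADAS design would use e.g. 1920x1080.
#define WIDTH  8
#define HEIGHT 8

// Binarize a grayscale frame: pixels brighter than `level` become 255,
// all others become 0. Written in the synthesizable C subset HLS expects
// (fixed-size arrays, no dynamic allocation).
void threshold(const uint8_t in[HEIGHT * WIDTH],
               uint8_t out[HEIGHT * WIDTH],
               uint8_t level) {
    for (int i = 0; i < HEIGHT * WIDTH; ++i) {
#pragma HLS PIPELINE II=1
        // With II=1, the synthesized pipeline processes one pixel per
        // clock cycle once it fills; in software this is a plain loop.
        out[i] = (in[i] > level) ? 255 : 0;
    }
}
```

Running the same function in a C testbench first, then synthesizing it, is the usual HLS workflow: the software model doubles as the golden reference for verifying the generated HDL.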
For the rest of this article, visit the Aldec Design and Verification Blog.