Is it time to start using the term “Vision for Everything”? Vision-based applications are entering industry after industry: it has been a few years since the emergence of Embedded Vision, and it is now used in a wide range of applications including Security, Medical, Smart Homes, Robotics, Transportation, Advanced Driver Assistance Systems (ADAS) and Augmented Reality (AR).
This is the first in a series of blogs explaining what you need to know to start designing Embedded Vision applications for ADAS: from choosing the right device and tools, to demystifying the vision algorithms used in automotive applications, to showing how to implement them in FPGAs.
ADAS consists of two main parts: vision and sensor fusion. Cameras in a smart car provide information such as object detection, classification and tracking. However, they do not provide the distance between the vehicle and an obstacle, which is needed to prevent a collision. For that, sensors such as LIDAR or RADAR come into play.
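To make that division of labor concrete, here is a minimal Python sketch (all names and numbers are hypothetical for illustration, not part of Aldec’s design): the camera supplies a class label and a bearing but no range, the radar supplies a range, and a naive fusion step pairs them by matching bearings.

```python
from dataclasses import dataclass

# Hypothetical, simplified types for illustration only.
@dataclass
class CameraDetection:
    label: str          # e.g. "vehicle", "pedestrian" from a classifier
    bearing_deg: float  # horizontal angle of the object, derived from its
                        # pixel position and the camera's field of view

@dataclass
class RadarReturn:
    bearing_deg: float  # angle of the radar echo
    range_m: float      # distance to the reflecting object

def fuse(detections, returns, max_angle_diff_deg=3.0):
    """Naively attach a radar range to each camera detection by
    picking the radar return closest in bearing."""
    fused = []
    for det in detections:
        best = min(returns,
                   key=lambda r: abs(r.bearing_deg - det.bearing_deg),
                   default=None)
        if best and abs(best.bearing_deg - det.bearing_deg) <= max_angle_diff_deg:
            fused.append((det.label, best.range_m))
    return fused

# Example: the camera sees a vehicle ~2 degrees right of center;
# the radar reports an echo at a similar bearing, 35 m away.
cam = [CameraDetection("vehicle", 2.1)]
rad = [RadarReturn(1.8, 35.0), RadarReturn(-20.0, 12.0)]
print(fuse(cam, rad))  # [('vehicle', 35.0)]
```

Real sensor fusion is considerably more involved, using calibrated camera/radar extrinsics and tracking filters rather than a simple bearing match, but the sketch captures the basic idea: each sensor contributes the measurement the other lacks.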
In this series of blogs we will focus mainly on the vision side of ADAS; sensor fusion will be covered in a future installment. The main goal of the series is to provide in-depth knowledge of Aldec’s complete ADAS reference design, which includes 360-Degree Surrounding View, Driver Drowsiness Detection and Smart-Rear View.