Aldec Design and Verification Blog
Performing cross spectrum video processing on a TySOM-3 board
March 12th, 2021, by Igor Tsapenko

Igor has over 10 years of experience in the Electronic Design Automation industry with Aldec. He began his career as a research engineer, advancing to product manager in Aldec's hardware emulation group, and is currently Aldec's Director of Applications Engineering. Igor holds Bachelor's and Master's degrees in Computer Engineering from National Tech University in Donetsk, Ukraine.
While vaccines are rolling out at an impressive pace, and as society slowly reopens, our best defense against the coronavirus continues to be early detection and rapid response (such as self-isolation). An early symptom of infection is an increased body temperature, which can be measured easily and contactlessly with thermal sensors or cameras sensitive to IR radiation. General-purpose cameras still have a role to play, however, in augmenting the thermal data and putting it into better context. Imagine two cameras – one IR and one standard – observing the entrance to a workplace or indoor public venue. If the image captured by the standard camera feeds a system with face-detection software, then the thermal image can be made more meaningful: yes, that heat source is a human face.
Paired thermal and standard optical images can be at the heart of a largely automated human temperature screening process for detecting people infected with COVID-19. So, how do you set about pairing the two images? As a well-known supplier of Zynq MPSoC-based embedded prototyping boards, Aldec has risen to the challenge with our brand-new AI-based thermal vision camera demo application. It leverages the great I/O expansion flexibility of our TySOM-3-ZU7EV board, which supports several different sensor interfaces as well as high-performance UltraScale+ FPGA fabric that has been proven in convolutional neural network (CNN) acceleration and sensor-fusion tasks at the edge. The demo application runtime is shown in Figure 1.

Figure 1: Demo Application Runtime

Demo Application Flow

The main idea of the demo application is to create a visual representation of the IR sensor data, locate the position of the human face using the standard camera's image stream processed by a CNN-based face-detection algorithm, and calculate body temperature. For the best visual effect, both video streams are merged. Two imagers are used in the project: an IR sensor and a standard (BlueEagle) camera.
A top-level view of the project is shown in Figure 2.

Figure 2: Demo application overview

Software Implementation

The software is implemented using the GStreamer Linux media framework – see Figure 3. Standard framework plugins are marked in orange, and custom software such as mlx-grabber and sdxfacedetect is marked in blue. The entire software flow is split into three separate GStreamer pipelines (the IR flow, the BlueEagle camera flow and the legend), each producing an independent 24-bit BGR video data stream; these streams are then merged into a common HDMI output image using the Video Mixer. The Video Mixer is an IP block commonly used in video output interfaces to combine several video data streams into one, which is then passed to a video output device. Here it is configured with four separate overlay BGR layers (planes 29–32), three of which are used to compose the final video stream for the HDMI monitor.

For the rest of this article, visit the Aldec Design and Verification Blog.
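The three-pipeline structure above might be sketched in gst-launch notation as follows (assembled here as Python strings so the shape is easy to see and check). The element names mlx-grabber and sdxfacedetect come from the article; every property, caps string, and plane number below is an illustrative assumption, not the demo's verified configuration.

```python
# Sketch of the three GStreamer pipeline descriptions feeding the Video
# Mixer's overlay planes. "mlx-grabber" and "sdxfacedetect" are the custom
# elements named in the article; all properties and caps here are
# illustrative assumptions.

BGR_CAPS = "video/x-raw,format=BGR"

def plane_sink(plane_id):
    # kmssink drives a DRM/KMS plane; the Video Mixer exposes its overlay
    # layers as planes 29-32 (hypothetical plane-id values).
    return f"kmssink plane-id={plane_id}"

pipelines = {
    # IR flow: custom grabber -> false-color BGR -> mixer overlay plane.
    "ir": f"mlx-grabber ! {BGR_CAPS} ! videoconvert ! {plane_sink(29)}",
    # Camera flow: V4L2 capture -> CNN face detection -> mixer overlay plane.
    "camera": (f"v4l2src device=/dev/video0 ! videoconvert ! "
               f"sdxfacedetect ! {BGR_CAPS} ! {plane_sink(30)}"),
    # Legend flow: static temperature-scale image -> mixer overlay plane.
    "legend": (f"filesrc location=legend.bmp ! decodebin ! imagefreeze ! "
               f"videoconvert ! {plane_sink(31)}"),
}

for name, desc in pipelines.items():
    print(f"gst-launch-1.0 {desc}")
```

Because each pipeline ends at its own mixer plane, the three flows run independently and the hardware composes the final HDMI frame.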