 Aldec Design and Verification
Farhad Fallahlalehzari
Farhad Fallahlalehzari works as an Application Engineer at Aldec focusing on Embedded System Design. He provides technical support to customers developing embedded systems. His background is in Electrical and Computer Engineering, concentrating on Embedded Systems and Digital Systems Design.

How does the Mars Perseverance rover benefit from FPGAs as the main processing units?

April 6th, 2021 by Farhad Fallahlalehzari

Tasked with searching for signs of ancient microbial life, the rover Perseverance landed on Mars at about 3:55 p.m. EST on February 18, 2021. The rover carries multiple sensors and cameras to collect as much data as possible and, given the volume of live data being recorded and the long transmission time from Mars to Earth, a powerful on-board processing system is essential.

However, whereas early Mars rovers were equipped mainly with CPUs and ASICs as the processing units, FPGAs are taking on much of the workload in Perseverance. Let’s consider why that is the case.
Read the rest of How does the Mars Perseverance rover benefit from FPGAs as the main processing units?

Performing cross spectrum video processing on a TySOM-3 board

March 12th, 2021 by Igor Tsapenko

While vaccines are rolling out at an impressive pace, and as society slowly reopens, our best defense against the coronavirus continues to be early detection and rapid response (such as self-isolation).

An early symptom of having the virus is an increased body temperature, which can be easily measured using contactless methods such as thermal sensors or cameras sensitive to IR radiation.

However, general-purpose cameras still have a role to play: augmenting the thermal data and putting it into better context.

Imagine two cameras – one IR and one standard – observing the entrance to a place of work or indoor public venue. If the image captured by the standard camera feeds a system with face detection software, then the thermal image can be made more meaningful: yes, that heat source is a human face.
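The core of that idea can be sketched in a few lines. The sketch below is illustrative only (not the TySOM-3 reference design): it maps a face bounding box detected in the RGB frame into a lower-resolution thermal frame and reads the peak temperature inside it. The resolutions, threshold and synthetic data are all assumed values, and the two cameras are assumed to be co-located with the same field of view.

```python
# Sketch (assumptions noted above): map an RGB-space face box into the thermal
# frame, then report the peak temperature found inside it.

RGB_W, RGB_H = 1920, 1080        # standard camera resolution (assumed)
THERM_W, THERM_H = 160, 120      # thermal sensor resolution (assumed)
FEVER_C = 38.0                   # alert threshold in Celsius (assumed)

def map_box_to_thermal(box):
    """Scale an (x, y, w, h) box from RGB to thermal pixel coordinates."""
    x, y, w, h = box
    sx, sy = THERM_W / RGB_W, THERM_H / RGB_H
    return (int(x * sx), int(y * sy), max(1, int(w * sx)), max(1, int(h * sy)))

def peak_temp(thermal, box):
    """Maximum temperature (deg C) inside a box of the thermal frame."""
    x, y, w, h = box
    return max(thermal[r][c] for r in range(y, y + h) for c in range(x, x + w))

# Synthetic thermal frame: ambient 22 C with one warm (face-sized) region.
thermal = [[22.0] * THERM_W for _ in range(THERM_H)]
for r in range(20, 40):
    for c in range(60, 80):
        thermal[r][c] = 38.5

face_rgb = (720, 180, 240, 240)          # face box from the RGB detector
face_th = map_box_to_thermal(face_rgb)
t = peak_temp(thermal, face_th)
print(f"peak {t:.1f} C", "ALERT" if t >= FEVER_C else "ok")  # peak 38.5 C ALERT
```

In a real deployment the box would come from a face-detection pipeline on the standard camera, and a calibration step (a homography between the two sensors) would replace the simple scaling used here.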

Read the rest of Performing cross spectrum video processing on a TySOM-3 board

SynthHESer – Aldec’s New Synthesis Tool

August 11th, 2020 by Sunil Sahoo

In the early days of digital design, all circuits were designed manually. You would draw Karnaugh maps, optimize the logic and draw the schematics. If you remember, we all did many logic optimization exercises back in college. It was time consuming and very error prone. That approach works for designs with a few hundred gates, but as designs grew larger it became infeasible.

Designs described at a higher level of abstraction are less prone to human error. High-level descriptions are written without significant concern for gate-level constraints; the conversion from high-level descriptions to gates is handled by synthesis tools. These tools use various algorithms to optimize the design as a whole, which avoids the problems caused by differing designer styles across blocks and by sub-optimal design practices. Logic synthesis also allows for technology-independent designs. Logic synthesis was commercialized in the late 1980s, and since then it has been part of the standard EDA tool chain for ASICs and FPGAs.
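To give a tiny taste of what synthesis automates: the K-map reduction a·b + a·b' = a can be proved mechanically by exhaustive truth-table comparison, which is conceptually what an equivalence check inside a synthesis flow does (this Python sketch is just an illustration, not how SynthHESer is implemented).

```python
# Prove by exhaustive truth-table check that the hand-drawn circuit
#   f = (a AND b) OR (a AND NOT b)
# is equivalent to the K-map-optimized circuit  f = a.
from itertools import product

def equivalent(f, g, n_inputs):
    """Exhaustively compare two combinational functions on all input vectors."""
    return all(f(*v) == g(*v) for v in product([0, 1], repeat=n_inputs))

original  = lambda a, b: (a & b) | (a & (1 - b))   # two ANDs, one OR, one NOT
optimized = lambda a, b: a                          # a single wire

print(equivalent(original, optimized, 2))  # True: four gates reduced to none
```

Exhaustive checking only scales to a handful of inputs; real tools use BDDs and SAT-based methods, but the correctness criterion is the same.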

Read the rest of SynthHESer – Aldec’s New Synthesis Tool

Enabling TySOM Zynq-based Embedded Development Board for AWS IoT Greengrass

June 16th, 2020 by Farhad Fallahlalehzari

Every day, new devices appear in homes, offices, hospitals, factories and thousands of other places as part of the Internet of Things (IoT). Clearly, they need to be connected to the internet, and a huge amount of raw data needs to be collected, stored and processed in the cloud.

There are many data centers available to store the data. However, only some provide features specifically for IoT applications. One of the most complete cloud-based IoT services available is Amazon Web Services (AWS) IoT Greengrass. It enables edge devices to act locally on the data networked devices generate, and provides secure bi-directional communication between the IoT devices and the AWS cloud for management, analytics and storage.
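The "act locally" idea is worth making concrete. The sketch below is purely illustrative (it does not use the real Greengrass Core SDK): an edge function inspects each raw sensor reading on the device and forwards only a small summary and any anomalies to the cloud, cutting raw-data traffic. The threshold, topic name and payload shape are all assumptions.

```python
# Illustrative edge-processing sketch: act on raw readings locally, publish
# only what the cloud actually needs (assumed threshold and message shape).

THRESHOLD = 75.0  # assumed alarm level for this hypothetical sensor

def edge_handler(readings, publish):
    """Process raw readings on the device; publish a compact summary."""
    alarms = [r for r in readings if r > THRESHOLD]
    summary = {"count": len(readings),
               "mean": sum(readings) / len(readings),
               "alarms": alarms}
    publish("sensors/summary", summary)   # one small message instead of many
    return summary

sent = []
summary = edge_handler([70.1, 71.4, 80.2, 69.9],
                       lambda topic, msg: sent.append((topic, msg)))
print(sent[0][0], summary["alarms"])  # sensors/summary [80.2]
```

On a TySOM board running Greengrass, `publish` would be backed by the Greengrass messaging API and the handler would run as a locally deployed function; the filtering logic itself is the part sketched here.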
Read the rest of Enabling TySOM Zynq-based Embedded Development Board for AWS IoT Greengrass

Connecting Emulated Design to External PCI Express Device

May 4th, 2020 by Krzysztof Szczur

These days verification teams no longer question whether hardware assisted verification should be used in their projects. Rather, they ask at which stage they should start using it.

Contemporary System-on-Chip (SoC) designs are already complex enough to make HDL simulation a bottleneck during verification, even before considering hardware-software co-verification or firmware and software testing. Thus, IC design emulation is an increasingly popular technique for verification with hardware in the loop.

Recently, hardware-assisted verification became much more affordable thanks to the availability of high-capacity FPGAs (such as the Xilinx Virtex UltraScale VU440) and their adoption for emulation by EDA vendors.

A good example of such an emulation platform is Aldec’s HES-DVM. It can accommodate designs of over 300 million ASIC gates on multi-FPGA boards (HES-US-1320), and capacity scales further by interconnecting multiple boards in a backplane.

Read the rest of Connecting Emulated Design to External PCI Express Device

How to Develop a 4K Ultra High Definition Image/Video Processing Application Using Zynq® MPSoC FPGA

April 7th, 2020 by Farhad Fallahlalehzari

Achieving higher resolution is a never-ending race for camera, TV and display manufacturers. After 4K ultra-high-definition (Ultra HD) imaging emerged, it became the main standard for today’s multimedia products. 4K Ultra HD brings us bigger screens with an immersive feel, and it solved the pixelation problem on large displays. 4K consumers are everywhere, from live sports broadcasting to video conferencing on our mobile devices.

There are, however, many technical challenges in developing systems that process 4K Ultra HD data. For example, a 4K frame is 3840 x 2160 pixels (about 8.3 Mpixel) and is refreshed at 60 Hz, equating to roughly 500 Mpixel/s, which demands a high-performance system to process frames in real time. Another bottleneck is power consumption, particularly for embedded devices where power is critical. Being low power yet high performance, FPGAs have shown strong potential to tackle these challenges. In this blog, you’ll learn all you need to know to start developing a 4K video conferencing project using FPGAs.
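The bandwidth arithmetic behind those numbers is worth working through explicitly (the 24-bit-per-pixel figure is an assumption for raw RGB; actual formats vary):

```python
# 4K Ultra HD raw data rate, computed from first principles.
width, height, fps = 3840, 2160, 60

pixels_per_frame = width * height          # 8,294,400, i.e. ~8.3 Mpixel
pixels_per_sec = pixels_per_frame * fps    # ~497.7 Mpixel/s, i.e. ~500 Mpixel/s

# At 24 bits (3 bytes) per pixel, the raw uncompressed data rate:
bytes_per_sec = pixels_per_sec * 3         # ~1.5 GB/s
print(f"{pixels_per_frame / 1e6:.1f} Mpixel/frame, "
      f"{pixels_per_sec / 1e6:.0f} Mpixel/s, "
      f"{bytes_per_sec / 1e9:.2f} GB/s raw")
```

That roughly 1.5 GB/s of raw pixel data is what makes a software-only pipeline struggle and a streaming FPGA datapath attractive.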

Read the rest of How to Develop a 4K Ultra High Definition Image/Video Processing Application Using Zynq® MPSoC FPGA

ARM-based SoC Co-Emulation using Zynq Boards

February 19th, 2020 by Michelle Mata-Reyes

Have you ever worked on a group project where you had to combine your work with that of a colleague from a different engineering discipline, and the lack of an efficient way to do so hurt the project’s overall outcome? Well, for software and hardware engineers developing an SoC, merging their respective engineering efforts for verification purposes is a big challenge.

Early access to hardware-software co-verification allows hardware and software teams to work concurrently and sets the foundation for a successful SoC project. However, many co-emulation methodologies are based on virtual processor models, which are not accurate representations of the design. Fortunately, Aldec has a solution that integrates an ARM-based SoC from Xilinx, specifically a Zynq UltraScale+ MPSoC, with the largest Xilinx UltraScale FPGA. Since the Zynq device includes the hard IP of the ARM processor, our solution provides an accurate representation of the ARM-based SoC design for co-verification.

Read the rest of ARM-based SoC Co-Emulation using Zynq Boards

When is robustness verification for DO-254 projects complete?

December 12th, 2019 by Janusz Kitel

Understandably, hardware designed for an aircraft, or indeed any safety-critical application, must be robust. I also believe that all engineers wish to verify their designs as thoroughly as possible anyway. However, there are limiting factors, most notably the high complexity of most designs. Since we cannot discover and verify the design against all abnormal conditions, the main question is: when is robustness verification truly complete?

Random nature of robustness

Test scenarios for robustness verification always contain many input-stability issues, such as erroneous values, lack or loss of a value, unexpected timing and unpredicted toggling. Certainly there is a significant random factor, but that should not lead us to oversimplify this part of verification by merely applying randomization methods, however advanced. The robustness of any design, especially for projects where human lives are potentially at risk, cannot be achieved by inspecting the results of randomly generated scenarios. It must be part of the original design.
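To make the categories above concrete, here is an illustrative constrained-random sketch (not taken from the DO-254 guidance) that generates stimuli for each class of abnormal condition: erroneous values, missing values, unexpected timing and spurious toggling. The valid range and delay bounds are assumed values; the point of this section still stands, in that any abnormal condition exercised this way must ultimately be captured as an explicit, reviewable requirement.

```python
# Illustrative constrained-random generator for "abnormal condition" stimuli
# (assumed ranges; four classes matching the ones listed in the text).
import random

random.seed(254)  # reproducible runs

def abnormal_stimulus(valid_range=(0, 255)):
    """Pick one abnormal-condition class and produce a concrete stimulus."""
    lo, hi = valid_range
    kind = random.choice(["erroneous_value", "no_value",
                          "late_arrival", "spurious_toggle"])
    if kind == "erroneous_value":
        return kind, random.choice([lo - 1, hi + 1])       # just out of range
    if kind == "no_value":
        return kind, None                                   # lack/loss of value
    if kind == "late_arrival":
        return kind, {"value": random.randint(lo, hi),
                      "delay_cycles": random.randint(1, 10)}
    return kind, [random.randint(0, 1) for _ in range(8)]   # toggling burst

for _ in range(4):
    print(abnormal_stimulus())
```

A generator like this can seed the search for abnormal conditions, but each condition it finds interesting still has to be promoted to a derived requirement and verified deterministically.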


The “Design Assurance Guidance for Airborne Electronic Hardware” document (DO-254) does not explicitly address robustness testing. However, two clarifying documents, FAA Order 8110.105A and the EASA Certification Memorandum, state that to demonstrate robustness the applicant should also define requirements-based testing. In other words, abnormal operating conditions are expected to be captured and documented as derived requirements.

How many “robustness requirements” do we need?

Having extra requirements for robustness verification does not solve the problem stated above, so the question really is: how many “robustness requirements” will be enough?
Read the rest of When is robustness verification for DO-254 projects complete?

Understanding the inner workings of UVM – Part 3

March 26th, 2018 by Vatsal Choksi

In this blog, I am going to discuss different phases that UVM follows.

UVM introduced these phases because synchronization across the design and testbench was necessary. With plain Verilog and VHDL, verification engineers did not have facilities such as clocking blocks or run phases. It is very important that test vectors applied from the testbench reach the Design Under Test (DUT) at the right time; if the timing of different signals varies, synchronization is lost and verification cannot proceed as expected. That is the main reason UVM has different phases.

The whole UVM environment is structured around phases, which are active from the beginning of the simulation to its end. The material discussed here will help people who are new to UVM.

To start with, most of the phases are callback methods. The methods are either functions or tasks, and they are inherited from the uvm_component class, the same as other testbench components. If you remember the first blog, we went through how to write a class; we covered OOP concepts such as inheritance and used them by extending the base class. Creating objects of a class, so it can be used as and when required, happens in build_phase, and this step takes place first. Next, after writing the different classes, it is important to connect them. For example, if I write several classes with different functionality, in the end I bring them all together under one top-level class. In Verilog this was the top-level module; in SystemVerilog it was the Environment class. Connecting all your sub-components under that main class is known as connect_phase. Then comes end_of_elaboration_phase: by the time this phase is active, everything is connected and the simulation is ready to begin.
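The key property of that ordering is that each phase completes across the entire component tree before the next phase starts. Real UVM phases are SystemVerilog tasks and functions on uvm_component (and build_phase is actually applied top-down while connect_phase runs bottom-up); the Python sketch below is only a language-neutral analogy of the tree-wide phase ordering itself.

```python
# Analogy only: a component tree where each phase runs tree-wide before the
# next phase begins, mirroring UVM's build -> connect -> end_of_elaboration.

class Component:
    def __init__(self, name):
        self.name = name
        self.children = []

    def build_phase(self, log):              log.append(f"build:{self.name}")
    def connect_phase(self, log):            log.append(f"connect:{self.name}")
    def end_of_elaboration_phase(self, log): log.append(f"eoe:{self.name}")

def run_phases(top):
    """Visit the whole component tree once per phase, in phase order."""
    log = []
    for phase in ("build_phase", "connect_phase", "end_of_elaboration_phase"):
        stack = [top]
        while stack:                 # the whole tree finishes this phase
            comp = stack.pop()       # before the next phase starts
            getattr(comp, phase)(log)
            stack.extend(comp.children)
    return log

env = Component("env")
env.children = [Component("agent"), Component("scoreboard")]
print(run_phases(env))
```

Running it shows every `build:` entry preceding every `connect:` entry, which in turn precede every `eoe:` entry; that global ordering is exactly what lets a UVM testbench assume the full hierarchy exists before any connections are made.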
Read the rest of Understanding the inner workings of UVM – Part 3

Do I really need a commercial simulator?

March 26th, 2018 by Sunil Sahoo

As an Applications Engineer I visit lots of potential customers, or talk to them at trade shows, who are doing FPGA designs but don’t own a commercial simulator. I ask them why that is. Most of the time it is budgetary restrictions. They don’t have funds to buy additional tools. I understand their situation and point out to them that at Aldec we have a very cost-effective simulator. But that is not what I want to talk about in this blog. I want to talk about engineers who say: “I am happy with the simulator my FPGA vendor provided me”, or “My simulations only take 15-20 minutes to run, I don’t think I need a faster simulator”, or “We don’t run simulations”.

That last response haunts me the most. For instance, at a recent site visit I was told: “We just load the design on our FPGA and test it out”. I asked how long a full test iteration (i.e. program FPGA -> test -> debug -> re-code -> re-program) takes. They said about an hour or two, depending on the bug. I then asked how much of that time was spent just running synthesis and programming the board. They said about 30 minutes.

Next, I proceeded to explain the benefits of running simulations in such a scenario.
Read the rest of Do I really need a commercial simulator?

© 2024 Internet Business Systems, Inc.
670 Aberdeen Way, Milpitas, CA 95035