Aldec Design and Verification

Archive for the ‘FPGA Design Creation and Simulation’ Category

How does the Mars Perseverance rover benefit from FPGAs as the main processing units?

Tuesday, April 6th, 2021

Tasked with seeking signs of ancient microbial life, the Perseverance rover landed on Mars at about 4:00 p.m. EST on February 18, 2021. The rover carries multiple sensors and cameras to collect as much data as possible and, given the volume of live data being recorded and the long data transmission time from Mars to Earth, a powerful on-board processing system is essential.

However, whereas early Mars rovers were equipped mainly with CPUs and ASICs as the processing units, FPGAs are taking on much of the workload in Perseverance. Let’s consider why that is the case.
(more…)

SynthHESer – Aldec’s New Synthesis Tool

Tuesday, August 11th, 2020

In the early days of digital design, all circuits were designed manually. You would draw the Karnaugh maps, optimize the logic, and draw the schematics. If you remember, we all did many logic optimization exercises back in college; it was time-consuming and very error-prone. That approach works for designs with a few hundred gates, but as designs grew larger and larger it became infeasible.

Designs described at a higher level of abstraction are less prone to human error, and high-level descriptions can be written without significant concern for gate-level design constraints. The conversion from a high-level description to gates is performed by synthesis tools, which use various algorithms to optimize the design as a whole. This sidesteps the problems caused by differing designer styles across blocks and by sub-optimal design practices. Logic synthesis tools also allow for technology-independent designs. Logic synthesis was commercialized in the late 1980s, and since then it has been part of the standard EDA tool chain for ASICs and FPGAs.
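To illustrate the abstraction gap, here is a minimal sketch (the entity and signal names are our own, not from any particular design) of a 4-to-1 multiplexer described behaviorally in VHDL. No K-maps or gate schematics are involved; the synthesis tool chooses and optimizes the gate-level or LUT-level implementation for the target technology.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Behavioral description: no gates, no K-maps. The synthesis tool
-- maps this to LUTs or gates and optimizes it for the target device.
entity mux4 is
  port (
    sel  : in  std_logic_vector(1 downto 0);
    din  : in  std_logic_vector(3 downto 0);
    dout : out std_logic
  );
end entity;

architecture rtl of mux4 is
begin
  with sel select
    dout <= din(0) when "00",
            din(1) when "01",
            din(2) when "10",
            din(3) when others;
end architecture;
```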

(more…)

Connecting Emulated Design to External PCI Express Device

Monday, May 4th, 2020

These days verification teams no longer question whether hardware assisted verification should be used in their projects. Rather, they ask at which stage they should start using it.

 

Contemporary System-on-Chip (SoC) designs are already complex enough to make HDL simulation a bottleneck during verification, to say nothing of hardware-software co-verification or firmware and software testing. Thus IC design emulation, with hardware in the loop, is an increasingly popular verification technique.

Recently, hardware-assisted verification has become much more affordable thanks to the availability of high-capacity FPGAs (like the Xilinx Virtex UltraScale VU440) and their adoption for emulation by EDA vendors.

A good example of such an emulation platform is Aldec’s HES-DVM. It can accommodate designs of over 300 million ASIC gates on multi-FPGA boards (HES-US-1320), and capacity scales further by interconnecting multiple boards in a backplane.

(more…)

ARM-based SoC Co-Emulation using Zynq Boards

Wednesday, February 19th, 2020

Have you ever worked on a group project where you had to combine your work with that of a colleague from a different engineering discipline, and the lack of an efficient way to do so hurt the project’s overall outcome? Well, for the software and hardware engineers developing an SoC, merging their respective engineering efforts for verification purposes is a big challenge.

Early access to hardware-software co-verification allows hardware and software teams to work concurrently and lays the foundation for a successful SoC project. However, many co-emulation methodologies are based on virtual processor models, which are not accurate representations of the design. Fortunately, Aldec has a solution that integrates an ARM-based SoC from Xilinx, specifically a Zynq UltraScale+ MPSoC, with the largest Xilinx UltraScale FPGA. Because the Zynq device includes the hard IP of the ARM processor, our solution provides an accurate representation of the ARM-based SoC design for co-verification.

(more…)

Emulation on the Cloud: HES Cloud delivers access to a high performance emulation platform

Thursday, June 15th, 2017

‘The cloud’ has been an industry buzzword for some time now, and whilst the initial focus was on data storage and sharing – which spawned the likes of Dropbox – ‘cloud computing’ is the latest trend. For instance, Amazon’s cloud platform, Amazon Web Services (AWS), gives users access to servers and a range of applications. Storage is available as before, but so too now are dedicated relational databases, which in Amazon’s case are provided through a separate service.

Enterprise businesses are taking advantage of cloud computing platforms for a number of reasons. These include pay-as-you-go pricing (as opposed to considerable capital expenditure up front), speed and flexibility (resources and storage can be made available quickly), and being spared the headache of maintaining a mass of IT hardware and keeping on top of software license renewals.

Also, earlier this year Amazon announced EC2 (Elastic Compute Cloud) F1, a compute instance with FPGAs that users can program to perform hardware acceleration. The F1 instance includes an FPGA developer Amazon Machine Image (AMI), which provides a development environment with scripts and tools for code compilation and design simulation.

It is expected that the primary users of EC2 F1 will be software developers working on complex, compute-intensive algorithms to which FPGAs lend themselves particularly well. High Performance Computing, for instance, will increasingly exploit FPGA technology.

But let’s not forget one of the most important roles that FPGAs have been playing in our industry – EDA – for a number of decades: hardware acceleration for ASIC prototyping purposes.

(more…)

FPGAs in an SoC World: How modern FPGA architecture influences verification methodologies

Thursday, June 1st, 2017

The SoC domination we have seen in the ASIC industry is coming to the FPGA world, changing the way FPGAs are used and the way FPGA projects are verified. The latest SoC FPGA devices offer a very interesting alternative: reprogrammable logic coupled with a microprocessor, usually an ARM core. New types of devices always bring a need for extended verification methodology. SoC ASIC has so far been the main pioneer of advanced, highly scalable verification methodologies; due to the complexity and size of such projects, ASIC labs were actually driving EDA vendors to deliver verification solutions for their projects.

 

With the growth of these projects, hardware emulation became a common tool, which was then integrated with virtual platforms and labeled ‘hybrid co-emulation’. This hybrid solution offered a single verification platform for both the software and hardware teams. Such platforms allow verification to be performed at the SoC level, so the entire project can be verified before the final design code is written and available, for example, for prototyping.

 

Hybrid emulation connects the working environment of software teams, who use virtual platforms, with that of hardware engineers, who use emulators. Why is this so important? Until now, the software portion of the project was developed against virtual models, separate from the hardware portion. Connecting these two domains allows the project to be tested at the SoC level instead of at the subsystem level, which in turn increases test coverage and enables problems to be detected much earlier.

 


Figure 1 – Hybrid co-emulation verification system.

(more…)

Software Driven Test of FPGA Prototype: Use Development Software to Drive Your DUT on an FPGA Prototyping Platform

Monday, April 10th, 2017

Most everyone would agree on how important FPGA prototyping is for testing and validating an IP, a sub-system, or a complete SoC design. Before the design is taped out it can be validated at speeds near real operating conditions, with physical peripherals and devices connected to it instead of simulation models. At the same time, these designs are not purely hardware; these days they incorporate a significant amount of the software stack, so hardware-software co-verification ranks high among the requirements in the verification plan.

 

However, preparing a robust FPGA prototype is not a trivial task. It requires strong hardware skills and a lot of time in the lab to configure and interconnect all the required peripheral devices with an FPGA base board. Even more difficult is creating a comprehensive test scenario that contains procedures to configure the various peripherals. Programming hundreds of registers in the proper sequence, then reacting to events and interrupts and checking status registers, is a complex process. A task that is straightforward in simulation, where full control over the design is assured, becomes extremely hard to implement in an FPGA prototype. Facing this challenge, verification engineers often connect a microprocessor or microcontroller daughter card to the main FPGA board. The IP or SoC subsystem being designed will be connected to some kind of CPU anyhow, so this approach seems natural. Having a CPU connected to the design implemented in the FPGA makes it easy to create programmatically reconfigurable test scenarios and enables test automation. Moreover, the work of software developers can now be reused, as the software stack with device drivers can become part of the initialization procedure in the hardware test. If that makes sense to you, then why not use an FPGA board that has all you need, both the FPGA and the CPU?
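As a rough illustration of the kind of register sequence involved, here is a minimal, self-contained VHDL testbench sketch. The bus protocol, register addresses, and signal names are all invented for illustration, and the DUT instantiation is omitted; a real prototype test would drive these sequences from the CPU instead.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical bus-functional testbench fragment; the bus protocol and
-- register map are invented for illustration, and no DUT is instantiated.
entity tb_reg_access is
end entity;

architecture sim of tb_reg_access is
  signal clk   : std_logic := '0';
  signal addr  : unsigned(7 downto 0)          := (others => '0');
  signal wdata : std_logic_vector(31 downto 0) := (others => '0');
  signal rdata : std_logic_vector(31 downto 0) := (others => '0');
  signal wr_en : std_logic := '0';
begin
  clk <= not clk after 5 ns;  -- 100 MHz testbench clock

  stimulus : process
    -- Single-cycle register write on the hypothetical bus
    procedure bus_write(a : in unsigned(7 downto 0);
                        d : in std_logic_vector(31 downto 0)) is
    begin
      addr <= a; wdata <= d; wr_en <= '1';
      wait until rising_edge(clk);
      wr_en <= '0';
    end procedure;
  begin
    bus_write(x"10", x"00000001");  -- enable the peripheral (invented map)
    bus_write(x"14", x"000000FF");  -- program a configuration value
    -- Poll a status bit instead of relying on a fixed delay
    wait until rdata(0) = '1' for 1 us;
    assert rdata(0) = '1'
      report "Peripheral never reported ready" severity note;
    wait;  -- end of test
  end process;
end architecture;
```

Multiply this by hundreds of registers, interrupts, and error paths, and the appeal of driving the same sequences from CPU software becomes clear.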
(more…)

An Easier Path to Faster C with FPGAs

Monday, November 14th, 2016

For most scientists, what is inside a high-performance computing platform is a mystery. All they usually want to know is that the platform will run whatever advanced algorithm is thrown at it. But what happens when a subject-matter expert creates a powerful model for an algorithm that in turn automatically generates C code that runs too slowly? FPGA experts have created an answer.

More and more, the general-purpose processor found in server-class platforms is yielding to something more optimized for the challenges of high-performance computing (HPC). Advanced algorithms like convolutional neural networks (CNNs), real-time analytics, and high-throughput sensor fusion are quickly overwhelming traditional hardware platforms. In some cases, HPC developers are turning to GPUs as co-processors and deploying parallel programming schemes – but at a massive cost in increased power consumption.

A more promising approach for workload optimization, using considerably less power, is hardware acceleration using FPGAs. Much as in the early days of FPGAs, when they found homes in reconfigurable compute engines for signal processing tasks, technology is coming full circle and the premise is again gaining favor. The challenge with FPGA technology in the HPC community has always been how a scientist with little to no hardware background translates their favorite algorithm onto a reconfigurable platform.
(more…)

FPGA VHDL Verification – Can we do this faster – with better quality – at no extra cost?

Monday, November 14th, 2016

As I recently shared, UVVM, VHDL’s long-awaited alternative to UVM, promises to be interesting. Later this week I’ll be joined by Espen Tallaksen, Bitvis Managing Director and Founder, for a joint webinar: UVVM – A game changer for FPGA VHDL Verification.

Below, please find Espen Tallaksen’s recent guest blog on the topic that originally appeared on the Aldec Blog.

FPGA VHDL Verification
How can we do this faster and with better quality – at no extra cost?
by Espen Tallaksen, Bitvis Managing Director and Founder

This is actually possible, with an average efficiency improvement of 20 to 50% for medium- to high-complexity FPGAs: less for data-path-oriented designs and more for control- or protocol-oriented designs. At no extra cost.

All that is required is that you develop your testbench the same way you develop your design. Every FPGA designer knows that a good top-level design architecture is critical. Most FPGA designers also know that a good microarchitecture is at least as important for module design. It should thus be obvious that a good architecture is equally important for your testbench, yet for some strange reason most testbenches do not have the same good architecture as the design being verified.

Most designers agree that the following are critical for an efficient development of a high quality design module:

– Overview, Readability, Simplicity

– Modifiability, Maintainability, Extendibility

– Debuggability

– Reusability

So why should testbenches be any different, when on average they consume as much time as the actual design? A minimal sketch of what such a structured testbench can look like is shown below.
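For readers new to UVVM, here is a minimal sketch of a structured testbench sequencer using UVVM’s utility library. It assumes UVVM has been compiled into the uvvm_util library; the DUT output signal and expected value are invented stand-ins, as a real testbench would instantiate the design under test.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Assumes the UVVM utility library is compiled as 'uvvm_util'
library uvvm_util;
context uvvm_util.uvvm_util_context;

entity tb_uvvm_sketch is
end entity;

architecture sim of tb_uvvm_sketch is
  signal clk  : std_logic := '0';
  -- Stand-in for a DUT output; a real testbench would instantiate the DUT
  signal dout : std_logic_vector(7 downto 0) := x"AB";
begin
  clk <= not clk after 5 ns;  -- 100 MHz clock

  sequencer : process
  begin
    log(ID_LOG_HDR, "Starting simple readback test");
    wait until rising_edge(clk);
    -- check_value() logs the check and raises an alert on mismatch
    check_value(dout, x"AB", error, "Checking DUT output after reset");
    report_alert_counters(FINAL);  -- print the pass/fail summary
    std.env.stop;                  -- end the simulation (VHDL-2008)
  end process;
end architecture;
```

The same structuring principles listed above apply: the sequencer gives overview and readability, and the utility library’s logging and checking give debuggability for free.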

(more…)

Beer, Cars, and Verification. My thoughts after DVCon Europe

Wednesday, November 9th, 2016

As I write this, I am visiting the Aldec corporate office in the US on the day following the historic presidential election. It’s been a busy travel season for this product manager, and only a few weeks ago I was at DVCon Europe in Munich – the city of pork knuckles, beer… and of course, cars.

The DVCon Europe conference is certainly growing, and the methodologies presented each year continue to mature and become readier to use. This year’s DVCon Europe was not thematically different from other recent conferences; subjects like automotive and IoT have flourished these past few years.

Yet nowhere else has the discussion about cars acquired such importance as here in the heart of Bavaria. In this region, known as the European Detroit, cars are a secular religion.

A few years ago we were wondering… how? Two years ago… when? Today production has its hands full and engineers are simply wondering… what’s next?

As with the American election, there is a sense that we are awaiting another revolution. Within a few dozen years, internal combustion engines will become extinct like the dinosaurs, or become, as Juergen Weyer of NXP has said, like “Kodak in the era of digital photography”.

And so we turn to the electronics field as the main enabler of this new era. With this turn, new challenges open up related to design and testing, not to mention the safety of billions of users.

Amid the vastness of topics such as automotive and IoT, I would not want us to miss this nugget from this year’s DVCon Europe conference: UVVM, VHDL’s long-awaited alternative to UVM. Given the large presence of VHDL in Europe, the Universal VHDL Verification Methodology (UVVM) could not have been born anywhere else. The concept is based on bus functional models, enriched with the flavor of the well-known and well-liked OSVVM… and it promises to be interesting.

(more…)



