 Aldec Design and Verification
Janusz Kitel
Janusz Kitel is a Requirements Management Specialist at Aldec, responsible for the Spec-TRACER product. He has 5 years of experience in requirements engineering and over 10 years of experience in product quality assurance. Janusz received his M.S. in Electronics and Telecommunication from Silesian …

Don’t be a Slave to the Documentation

 
September 20th, 2017 by Janusz Kitel

Are you a requirements engineer whose main goal is to provide well-organized documentation? Do you have great knowledge of the industry, business analysis and systems, but struggle with the shape and look of your documentation? Do you still hear, for instance, that the specification document is not easy to read and is difficult to use?

Requirements first

Requirements are the starting point for all other activities in a project lifecycle, so the specification document is crucial to the project. The document has many audiences, such as stakeholders, designers, verification engineers and other groups involved in the project. This forces the author to take care of the document’s structure and organization. Preparing such a document once is not a big deal. The problem is that the document has to be modified many times: the requirements are constantly changing, with new features appearing, some being modified and some being removed. Reclassification and reorganization must be repeated over and over. In that case, I am pretty sure you will be contending with issues such as auto-numbering, indentation and paragraph styles, as well as tables and drawings that just do not fit the page.

Another kind of trouble comes from collaboration. Requirements should be developed by more than one engineer, but working together on the same document is a real challenge. Forgetting to enable Track Changes, using the wrong version of a document, or even using different versions of Office tools are the most common collaboration issues.

Finally, there may be a situation in which you focus on a document’s structure and aesthetics more than on its content. In the end, your document may be well prepared, but there is a serious risk that the requirements will be ambiguous, incomplete and/or inconsistent. This can happen when huge amounts of energy are spent solely on keeping the document organized and current. For the rest of this article, visit the Aldec Design and Verification Blog.

Demystifying AXI Interconnection for Zynq SoC FPGA

 
September 14th, 2017 by Farhad Fallahlalehzari

Imagine traveling back in time to the early human ages. Meeting a person there would be both scary and interesting: they probably cannot speak, and if they do, you won’t be able to understand them. Clearly, communication will not be possible until you find a mutual way to convey your respective meanings and intentions. The same principle applies in the world of electronics, as there are various types of interfaces among electronic devices. Therefore, a standard communication protocol eases the transfer of data within a system, especially in a System-on-Chip (SoC), which consists of several different subsystems.

SoC FPGAs such as the Xilinx® Zynq™ adopt the ARM Advanced Microcontroller Bus Architecture (AMBA) as the on-chip interconnection standard to connect and manage the functional blocks within the SoC design. The Advanced eXtensible Interface (AXI), part of the AMBA standard, is the protocol used for communication between blocks of IP in these FPGAs.
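
To make the idea concrete, here is a minimal, hypothetical C sketch of how software on the Zynq’s processing system commonly reaches an AXI-mapped peripheral in the programmable logic under Linux: the peripheral’s register window is mapped into user space and accessed as ordinary memory. The base address and register offsets are illustrative assumptions, not details from the article.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical AXI-Lite peripheral in the PL; the base address and
 * register offsets below are illustrative assumptions only. */
#define PERIPH_BASE  0x43C00000u  /* typical PL address range on Zynq */
#define REG_CTRL     0x00         /* control register offset */
#define REG_STATUS   0x04         /* status register offset */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map one page of the peripheral's AXI address window. */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, PERIPH_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[REG_CTRL / 4] = 0x1;                 /* start the peripheral */
    printf("status = 0x%08x\n", regs[REG_STATUS / 4]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}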

Read the rest of Demystifying AXI Interconnection for Zynq SoC FPGA

Introduction to Zynq™ Architecture

 
August 25th, 2017 by Farhad Fallahlalehzari

The history of System-on-Chip (SoC)

Do we prefer to have a small electronic device or a larger one? The answer will often be “the smaller one”. However, before the commercialization of small radios, many people were interested in having big radios for the extravagance. Subsequently, at the beginning of the emergence of compact radios, those who preferred the flamboyance of large radios refused to use them. Slowly but surely, the overwhelming benefits of owning a more compact radio led to the proliferation of smaller devices. These days, advancing technology enables cutting-edge companies to encapsulate the different parts of a system into increasingly smaller devices, all the way down to a single chip, which brought the System-on-Chip (SoC) concept to the electronics world. By way of an example of an SoC, I will explain the Zynq-7000 all-programmable SoC. It consists of two hard processor cores, programmable logic (PL), ADC blocks and many other features, all in one silicon chip.

Before the invention of the Zynq, a processor was coupled with a separate Field Programmable Gate Array (FPGA), which made communication between the Programmable Logic (PL) and the Processing System (PS) complicated. The Zynq architecture, as the latest generation of Xilinx’s all-programmable System-on-Chip (SoC) families, combines a dual-core ARM Cortex-A9 with a traditional FPGA fabric. The interface between the different elements within the Zynq architecture is based on the Advanced eXtensible Interface (AXI) standard, which provides high-bandwidth, low-latency connections.

Before the ARM processor was implemented inside the Zynq device, users relied on a soft-core processor such as Xilinx’s MicroBlaze. The main advantage of using MicroBlaze was, and remains, the flexibility of processor instances within a design. On the other hand, the inclusion of a hard processor in the Zynq delivers significant performance improvements. Also, by reducing the system to a single chip, the overall cost and physical size of the device are reduced.

Zynq Design Flow

The design flow for the Zynq architecture has some steps in common with that of a regular FPGA. The first stage is to define the specifications and requirements of the system. Next, during the system design stage, the different tasks (functions) are assigned for implementation in either the PL or the PS, a process called task partitioning. This stage is important because the performance of the overall system depends on tasks/functions being assigned to the most appropriate technology: hardware or software. For the rest of this article, visit the Aldec Design and Verification Blog.
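
To make the task partitioning idea concrete, consider a hypothetical split: control-heavy code (configuration, protocol handling) stays in software on the PS, while a compute-bound kernel such as the FIR filter below, written here in plain C, is a natural candidate for hardware in the PL. The function and its parameters are invented for illustration.

#include <stdint.h>
#include <stddef.h>

/* A compute-bound kernel like this FIR filter dominates many DSP
 * workloads; profiling would flag it as a candidate to move from
 * the PS (software) into the PL (hardware). Illustrative only. */
void fir_filter(const int16_t *x, int16_t *y, size_t n,
                const int16_t *coeff, size_t taps)
{
    for (size_t i = taps - 1; i < n; i++) {
        int32_t acc = 0;
        for (size_t k = 0; k < taps; k++)   /* multiply-accumulate */
            acc += (int32_t)coeff[k] * x[i - k];
        y[i] = (int16_t)(acc >> 15);        /* Q15 scaling */
    }
}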

Accelerating Simulation of Vivado Designs with HES

 
August 11th, 2017 by Krzysztof Szczur

FPGA Design Verification Challenge

The FPGA design and verification “ecosystem” changes rapidly to keep pace with the fast-growing size of FPGA devices. The largest Xilinx Virtex UltraScale chips provide 4.4 million logic cells or, by another metric, a 50-million equivalent gate count.

To enable an efficient design process for Virtex-7 and newer UltraScale FPGAs, Xilinx provides the Vivado Design Suite. Besides supporting a classical HDL design flow, it also provides system-level design tools like IP Integrator, System Generator and even High-Level Synthesis, which are very convenient for large and complex designs.

Verification has always taken a significant share of the project schedule, with HDL simulation being the main stage of that process. With such big designs, however, even the fastest simulators can spend hours on simulation tasks.

Simulation Acceleration with HES-DVM™

Aldec’s HES-DVM bridges this gap, enabling accelerated simulation with the design running in the FPGA and the testbench in the simulator.

Aldec has been providing HES™ (Hardware Emulation Solutions) since 2001. During that time HES has evolved to address the most sophisticated design and customer requirements. Simulation acceleration is thus only one of its applications, alongside hybrid co-emulation, in-circuit emulation and physical prototyping.

With simulation acceleration, the user can move any synthesizable module from the simulator to the FPGA, thus offloading some processing from the HDL simulator. Typically, the entire design is implemented in the HES board and the simulator executes only the testbench.

Figure 1: Signal-level simulation acceleration

The HES boards are seamlessly integrated with the simulator via a PCI Express x8 physical connection to the host workstation. HES-DVM provides co-simulation interfaces for Aldec’s Riviera-PRO and Active-HDL simulators, as well as for third-party simulators. It can be used on both Linux and Windows operating systems, with all required PCIe drivers and interfaces working out of the box.

The DVM tool automates the process of design compilation and implementation for HES boards. It generates all the necessary scripts and configuration files to run simulation acceleration on a given HES board, and also brings many useful debugging features. Despite running your design in FPGA hardware, you keep simulation-level visibility with an RTL view of all internal probes.

Figure 2: Design setup flow for acceleration using DVM™

Acceleration Benchmark

How much acceleration can I achieve? This is always the first question customers ask, and frankly there is no straight answer, because the result depends on the complexity of both the design and the testbench. Usually a good estimate can be obtained by running simulation profiling and then applying Amdahl’s law. However, the best way to verify the acceleration potential is simply to experiment with a typical design, so we created a simple memory sub-system design using the Xilinx Vivado Design environment. It contains a MIG controller for DDR3, an AXI interconnect, two AXI traffic generators and one AXI protocol checker, as shown in the following diagram.

Figure 3: Diagram created for memory subsystem benchmarking
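
As a side note, the profiling-plus-Amdahl estimate mentioned above is easy to sketch. In the C fragment below, p is the fraction of simulation time spent in the accelerated (synthesizable) part and s is the raw speedup of that part in the FPGA; the numbers are made up for illustration.

#include <stdio.h>

/* Amdahl's law: overall speedup when a fraction p of the runtime
 * is accelerated by a factor s, and the rest (the testbench left
 * in the simulator) runs at its original speed. */
static double amdahl(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void)
{
    /* Illustrative profile: 90% of time in the DUT, 100x in FPGA. */
    printf("estimated speedup: %.1fx\n", amdahl(0.90, 100.0));
    /* Prints ~9.2x: the testbench left in the simulator dominates. */
    return 0;
}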

Benchmark Results

Workstation and software used for benchmarking:

Workstation:
CPU: Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz
RAM: 32 GB
HES Board: HES7XV4000BP_REV2, containing two Virtex-7 2000 FPGAs

Software:
OS: Linux CentOS 6, x86_64
Simulator: Riviera-PRO 2017.02
Design env: Vivado 2016.4
Acceleration env: HES-DVM 2017.02

If you are interested in further details about this project, the benchmark, and tools which can significantly accelerate your simulation, see the following application note: https://www.aldec.com/en/support/resources/documentation/articles/1915

Traceability Matrices: Headache or Real Value

 
August 2nd, 2017 by Janusz Kitel

Traceability is becoming increasingly important in most engineering projects, if only on the grounds of ‘good practice’, and it is specifically required for projects that have to meet safety standards such as DO-254 and ISO 26262.

To provide traceability, you must maintain the relationships between all aspects of a project, from the system-level requirements through implementation and verification. Unfortunately, many organizations reduce their traceability responsibilities to preparing a few matrices for major reviews and complying with the respective safety standards. Such an approach is very time-consuming and, I would argue, does little if anything to improve the overall management of the project, or the design for that matter.

Good traceability data helps prevent errors and omissions in specifications, design and test plans. The first step is to define a traceability model.

Figure 1. Example Traceability Model
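
As a toy illustration of the value such a model brings, the hypothetical C sketch below stores requirement-to-test links and reports any requirement with no test coverage; the identifiers are invented, and a real traceability model, like the one in Figure 1, would span several more levels.

#include <stdio.h>
#include <string.h>

/* One trace link: a requirement covered by a test case. Hypothetical IDs. */
struct link { const char *req; const char *test; };

static const char *reqs[] = { "REQ-001", "REQ-002", "REQ-003" };
static const struct link links[] = {
    { "REQ-001", "TC-010" },
    { "REQ-003", "TC-011" },
};

int main(void)
{
    /* Report requirements with no downstream test: a gap the
     * traceability data exposes before a major review does. */
    for (size_t i = 0; i < sizeof reqs / sizeof reqs[0]; i++) {
        int covered = 0;
        for (size_t j = 0; j < sizeof links / sizeof links[0]; j++)
            if (strcmp(links[j].req, reqs[i]) == 0) covered = 1;
        if (!covered) printf("uncovered: %s\n", reqs[i]);
    }
    return 0;
}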

Read the rest of Traceability Matrices: Headache or Real Value

Emulation on the Cloud: HES Cloud delivers access to a high performance emulation platform

 
June 15th, 2017 by Krzysztof Szczur

‘The cloud’ has been an industry buzzword for some time now, and whilst the initial focus was on data storage and sharing (spawning the likes of Dropbox), ‘cloud computing’ is the latest trend. For instance, Amazon’s cloud platform, Amazon Web Services (AWS), gives users access to servers and a range of applications. Storage is available as before, but so too now are dedicated relational databases, which in Amazon’s case are provided through a separate service.

Enterprise businesses are taking advantage of cloud computing platforms, and for a number of reasons. These include pay-as-you-go pricing (as opposed to considerable capital expenditure), speed and flexibility (resources and storage can be made available quickly), and being spared the headache of maintaining a mass of IT hardware and keeping on top of software license renewals.

Also, earlier this year Amazon announced EC2 (Elastic Compute Cloud) F1, a compute instance with FPGAs that users can program to perform hardware acceleration. The F1 instance includes an FPGA developer Amazon Machine Image (AMI), a development environment with scripts and tools for code compilation and design simulation.

It is expected that the primary users of EC2 F1 will be software developers working on complex and compute-intensive algorithms, to which FPGAs lend themselves particularly well. For instance, High Performance Computing will increasingly exploit FPGA technology.

But let’s not forget one of the most important roles that FPGAs have been playing in our industry – EDA – for a number of decades: hardware acceleration for ASIC prototyping purposes.

Read the rest of Emulation on the Cloud: HES Cloud delivers access to a high performance emulation platform

FPGAs in an SoC World: How modern FPGA architecture influences verification methodologies

 
June 1st, 2017 by Zibi Zalewski, General Manager, Hardware Division

The SoC domination observed in the ASIC industry is coming to the FPGA world, changing the way FPGAs are used and FPGA projects are verified. The latest SoC FPGA devices offer a very interesting alternative: reprogrammable logic paired with a microprocessor, usually an ARM. With new types of devices there is always a need for an extended verification methodology. SoC ASICs have so far been the main pioneers of advanced and highly scalable verification methodologies; due to the complexity and size of such projects, ASIC labs were actually driving EDA vendors to deliver verification solutions for their projects.

With the growth of these projects, hardware emulation became a common tool, which was then integrated with virtual platforms and labeled ‘hybrid co-emulation’. This hybrid solution offered a single verification platform for both software and hardware teams. Such platforms allow verification to be performed at the SoC level, so the entire project can be verified before the final design code is actually written and available, for example, for prototyping.

Hybrid emulation connects the work environment of software teams using virtual platforms with that of hardware engineers using emulators. Why is this so important? Until now, the software portion of the project was developed on virtual models, separate from the hardware portion. Connecting these two domains allows the project to be tested at the SoC level instead of the subsystem level, which in turn increases test coverage and enables problems to be detected much earlier.


Figure 1 – Hybrid co-emulation verification system.

Read the rest of FPGAs in an SoC World: How modern FPGA architecture influences verification methodologies

Software Driven Test of FPGA Prototype: Use Development Software to Drive Your DUT on an FPGA Prototyping Platform

 
April 10th, 2017 by Krzysztof Szczur

Almost everyone would agree on how important FPGA prototyping is for testing and validating an IP, a sub-system, or a complete SoC design. Before the design is taped out, it can be validated at speeds near real operating conditions, with physical peripherals and devices connected to it instead of simulation models. At the same time, these designs are not purely hardware; these days they incorporate a significant amount of the software stack, so co-verification of hardware and software ranks high among the requirements in the verification plan.

However, preparing a robust FPGA prototype is not a trivial task. It requires strong hardware skills and a lot of time in the lab to configure and interconnect all the required peripheral devices with an FPGA base board. Even more difficult is creating a comprehensive test scenario containing procedures to configure the various peripherals. Programming hundreds of registers in the proper sequence, then reacting to events and interrupts and checking status registers, is a complex process. A task which is straightforward during simulation, where full control over the design is assured, becomes extremely hard to implement in an FPGA prototype. Facing this challenge, verification engineers often connect a microprocessor or microcontroller daughter card to the main FPGA board. The IP or SoC subsystem you are designing will be connected to some kind of CPU anyhow, so this approach seems natural. Having a CPU connected to the design implemented in the FPGA facilitates creating programmatically reconfigurable test scenarios and enables test automation. Moreover, the work of software developers can now be reused, as the software stack with device drivers can become part of the initialization procedure in the hardware test. If that makes sense to you, then why not use an FPGA board that has all you need: both the FPGA and the CPU?
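
In spirit, the ‘program the registers, then react to status’ pattern looks like the hypothetical bare-metal C fragment below, running on the CPU attached to the FPGA prototype. All register names, addresses and bit meanings are invented for illustration.

#include <stdint.h>

/* Hypothetical memory-mapped DUT registers on the FPGA prototype;
 * addresses and bit meanings are illustrative assumptions. */
#define DUT_BASE   0x40000000u
#define REG(off)   (*(volatile uint32_t *)(DUT_BASE + (off)))
#define CTRL       0x00   /* bit0: start */
#define STATUS     0x04   /* bit0: done, bit1: error */
#define CFG0       0x10

int run_test(void)
{
    REG(CFG0) = 0x0000ABCD;      /* configure the DUT */
    REG(CTRL) = 0x1;             /* kick off the operation */

    while ((REG(STATUS) & 0x1) == 0)
        ;                        /* poll until 'done' is set */

    return (REG(STATUS) & 0x2) ? -1 : 0;   /* check 'error' bit */
}
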
Read the rest of Software Driven Test of FPGA Prototype: Use Development Software to Drive Your DUT on an FPGA Prototyping Platform

Aldec Springs Into Action: A look back at a busy show season

 
April 6th, 2017 by Henry Chan

It’s been a busy season for Aldec. The weather has warmed here in the desert and, as the trees and greenery enliven in spring, Aldec has also been bursting with activity. From DVCon and the International Symposium on FPGAs in the US to Embedded World and CTIC in Europe, there have been some exciting developments from Aldec in verification, embedded systems, and DO-254.

These major events and conferences have been a great time to provide some updates on the latest Aldec endeavors and to provide an in-person look at the capability of our tools.

The DVCon U.S. Conference and Exhibition, held in San Jose, California, holds a special place in my heart because it was the first industry conference I attended after starting my career in EDA. Every year I enjoy returning to see the latest verification advancements and to speak with those who are hard at work trying to improve verification efforts. Portable stimulus was a hot topic, and emulation seemed to be growing in popularity. This year we brought our Hardware Emulation Solutions (HES™) so that people could get an in-person look at our hardware. We showed off the speed benefits of emulation over traditional simulation by hooking up a UVM testbench to an in-house network-on-chip design running in our FPGA boards. As design sizes increase, I think emulation will become a more widely adopted solution to the simulation bottleneck.

Read the rest of Aldec Springs Into Action: A look back at a busy show season

An Easier Path to Faster C with FPGAs

 
November 14th, 2016 by Zibi Zalewski, General Manager, Hardware Division

For most scientists, what is inside a high-performance computing platform is a mystery. All they usually want to know is that a platform will run an advanced algorithm thrown at it. What happens when a subject matter expert creates a powerful model for an algorithm that in turn automatically generates C code that runs too slowly? FPGA experts have created an answer.

More and more, the general-purpose processor found in server-class platforms is yielding to something more optimized for the challenges of high-performance computing (HPC). Advanced algorithms like convolutional neural networks (CNNs), real-time analytics, and high-throughput sensor fusion are quickly overwhelming traditional hardware platforms. In some cases, HPC developers are turning to GPUs as co-processors and deploying parallel programming schemes – but at a massive cost in increased power consumption.

A more promising approach for workload optimization using considerably less power is hardware acceleration using FPGAs. Much as in the early days of FPGAs, when they found homes in reconfigurable compute engines for signal-processing tasks, technology is coming full circle and the premise is again gaining favor. The challenge with FPGA technology in the HPC community has always been how a scientist with little to no hardware background translates their favorite algorithm onto a reconfigurable platform.
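
High-level synthesis is one such translation path: the scientist keeps writing C and the tool infers the hardware. The hypothetical kernel below hints at the style; the #pragma HLS directive follows Vivado HLS conventions, though exact pragma support varies by tool and version.

/* A C kernel written for high-level synthesis: the loop is a
 * candidate for pipelining so one result is produced per cycle.
 * Style follows Vivado HLS conventions; details vary by tool. */
void vadd(const float a[1024], const float b[1024], float out[1024])
{
    for (int i = 0; i < 1024; i++) {
#pragma HLS PIPELINE II=1
        out[i] = a[i] + b[i];
    }
}
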
Read the rest of An Easier Path to Faster C with FPGAs



