The Breker Trekker
Adnan Hamid, CEO of Breker
Adnan Hamid is the founder and CEO of Breker and the inventor of its core technology. Under his leadership, Breker has become a market leader in functional verification technologies for complex systems-on-chips (SoCs), and in Portable Stimulus in particular.
The Next Wave in Verification
September 29th, 2016 by Adnan Hamid, CEO of Breker
There is an important standard being worked on within Accellera, and given its name you might think it is just another incremental standard on a somewhat tired theme. It is called Portable Stimulus, yet it has almost nothing to do with stimulus, and that stimulus, once generated by a tool (the tool itself is not defined in the standard), is most certainly not portable. It is a fundamentally new approach to verification that could transform how chips and low-level software are verified. We will get back to the name in a moment, but the important thing is that users become informed about this new language and choose to have their voices heard in the standardization effort.
Let’s start by stating exactly what this standard is attempting to define. It is a high-level model of verification intent based on the notion of graphs. From this model, various tools can generate, using constrained-random techniques, a scenario that directly targets a specific execution engine, such as an emulator, a virtual prototype or post-silicon hardware. The scenario generally executes on one or more processors embedded in the design and may coordinate with I/O interfaces driven, for example, by UVM VIP. Coverage is collected in the originating model. A tool also generates the checker and coordinates the external activities necessary for that scenario to interact with its environment.
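To make the idea concrete, here is a minimal sketch, in Python rather than any committee syntax (which was still being defined at the time of writing): verification intent is captured as a graph of actions, and a tool performs a constrained-random walk through it to produce one legal end-to-end scenario. The action names (cfg_dma, usb_rx and so on) are invented for illustration only.

```python
import random

# Hypothetical intent graph: each action maps to the legal next actions.
# A terminal action (empty successor list) represents the end-to-end goal.
GRAPH = {
    "start":     ["cfg_dma", "cfg_usb"],
    "cfg_dma":   ["dma_copy"],
    "cfg_usb":   ["usb_rx"],
    "dma_copy":  ["check_mem"],
    "usb_rx":    ["check_mem"],
    "check_mem": [],
}

def generate_scenario(seed=None):
    """Walk the graph from 'start' to a terminal action, choosing
    randomly among the legal next actions at each step."""
    rng = random.Random(seed)
    path, node = [], "start"
    while GRAPH[node]:
        node = rng.choice(GRAPH[node])
        path.append(node)
    return path

print(generate_scenario(seed=1))
```

Each run yields a complete, valid path through the design's capabilities; a real tool would then retarget that path to an emulator, virtual prototype or silicon by emitting C code for the embedded processors.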
You will notice that nowhere in that description did the words stimulus or portability appear. One potential reason for mentioning stimulus is a connection to the past. SystemVerilog and UVM define a methodology that concentrates on stimulus, and this is their big downfall. They randomly generate stimulus with no idea of what they are intending to accomplish. So long as each stimulus vector abides by the combinatorial constraints, it is fair game. That leaves the user with the task of sorting out which tests are valuable and which add nothing toward verification closure.
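The contrast can be illustrated with a toy sketch (again in Python, with an invented addr/length constraint standing in for a SystemVerilog constraint block): a pure constrained-random generator accepts any vector that satisfies the constraint, with nothing steering the tests toward a goal.

```python
import random

def random_stimulus(rng):
    """Generate one (addr, length) stimulus vector. Any pair that
    satisfies the combinatorial constraint is 'fair game' -- there is
    no notion of what the resulting test is meant to accomplish."""
    while True:
        addr, length = rng.randrange(256), rng.randrange(64)
        if addr + length <= 256:  # the only guidance: a legal-range constraint
            return addr, length

rng = random.Random(0)
vectors = [random_stimulus(rng) for _ in range(1000)]
# Every vector is legal, but many exercise the same behavior, and the
# user must sort out afterwards which ones advanced verification closure.
```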
The new methodology starts with an end-to-end definition of what a system is capable of doing. Each scenario that is generated is a valid example of a possible operation of the device. Every scenario accomplishes something useful and unique, so there is an immediate increase in efficiency and effectiveness. Schedulers in the synthesis tool can work out how to stress the application by running multiple scenarios at the same time.
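How a scheduler might stress the design is sketched below, under the simplifying assumption that scenarios are just ordered action lists: several independently generated scenarios are interleaved into one test, preserving each scenario's internal order while their actions contend for shared resources. This is an illustration of the idea, not any tool's actual algorithm.

```python
import random

def interleave(scenarios, seed=0):
    """Toy scheduler: merge several scenarios' action streams into one
    stress test, preserving each scenario's internal ordering."""
    rng = random.Random(seed)
    streams = [list(s) for s in scenarios]
    merged = []
    while any(streams):
        stream = rng.choice([s for s in streams if s])
        merged.append(stream.pop(0))
    return merged

# Two hypothetical scenarios generated from the intent graph.
dma = ["cfg_dma", "dma_copy", "check_mem"]
usb = ["cfg_usb", "usb_rx", "check_buf"]
print(interleave([dma, usb]))
```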
There is another huge benefit to this approach, which solves one of the big problems in the UVM flow today. UVM models are not hierarchically composable, which means that every time you integrate a few blocks together, you have a significant amount of work to do before you can generate tests again. Similarly, you cannot just take the verification environment that comes along with a piece of third-party IP and directly use it in your system-level testbench. Those problems become a thing of the past with graph models. Describing what a piece of IP is capable of doing is the same at the block level, the subsystem level and the system level.
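Continuing the toy graph sketch from above (invented names, not any real tool's representation), composability falls out naturally: block-level intent graphs combine by simple union, so the same description serves the block-, subsystem- and system-level testbenches without rework.

```python
def compose(*graphs):
    """Merge block-level intent graphs into one system-level graph.
    Each graph maps an action name to its list of legal successors."""
    system = {}
    for g in graphs:
        for action, successors in g.items():
            merged = system.setdefault(action, [])
            for s in successors:
                if s not in merged:
                    merged.append(s)
    return system

# Hypothetical block-level graphs delivered with two pieces of IP.
dma_block = {"cfg_dma": ["dma_copy"], "dma_copy": []}
usb_block = {"cfg_usb": ["usb_rx"], "usb_rx": []}

# The system-level model is just their union -- no testbench rework.
soc = compose(dma_block, usb_block)
```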
Better names may have been verification intent language, model for verification synthesis, scenario verification or software-driven verification because these things describe what is being created. However, Portable Stimulus is a reasonable working name for the committee and gives people a general pointer towards the area of verification it is addressing. It remains to be seen what the language will be called when released.
We wanted our readers to know that the potential of what is being offered is much broader in scope and capabilities than what the name might suggest. For example, if you think that what is being worked on is just another SystemVerilog-type language, then you are in for a big surprise. What is being worked on is the next generation verification methodology and it is important that it meets your needs and not something that just fits in with the solutions that EDA companies may already have. This is your verification standard for the next twenty years, being collaborated on by major EDA vendors and big name user companies. It is important to get it right! In that spirit, Breker is always open to learning about user requirements and what you want in a verification methodology.
The user’s voice has already been heard within the committee, and some important decisions were made that bring us closer to something the whole industry can be proud of, rather than something quick and dirty that would have to be patched in the future. Breker has been at this longer than anyone else in the industry, and we have succeeded by listening to you. We are ready to do whatever innovation it takes to solve your toughest problems and will do our best to ensure it becomes part of the standard. Even better, you can help us by getting involved in the Accellera standards process.
In the next few blogs, we will start to explain various aspects of our proposed solution while other blogs will provide some background and explanation for the decisions which we believe will yield the most flexible and powerful solution. Together we can make this happen.
Category: Knowledge Depot