 Decoding Formal
Roger Sabbagh - VP of Applications Engineering at Oski
Roger Sabbagh works with silicon design teams to find opportunities to deploy formal verification solutions that accelerate verification and close gaps.

Use Formal Like Your Life Depended On It

 
June 7th, 2017 by Roger Sabbagh - VP of Applications Engineering at Oski

CDNLive EMEA was held on May 15-17, 2017 at the Infinity Hotel and Conference Centre in Munich, Germany and Oski was there exhibiting as a Cadence partner in the vendor exposition. It was a privilege to attend and to meet many Cadence users from across Germany, plus a wide spectrum of other places including Brazil, the Czech Republic, England, France, Ireland, Italy, Netherlands, Scotland, Serbia and Spain.

It was there that Davide Santo, NXP Semiconductor’s Advanced Driver Assistance Systems (ADAS) visionary, delivered a standing-room-only keynote in which he described the coming revolution in automated and autonomous driving. He predicted massive growth in future computing requirements that will be necessary to achieve full vehicle automation. He explained how a hybrid topology of a central fusion unit and distributed smart sensors will be required to “sense”, “think” and “act” in unison.

Davide Santo at the CDNLive EMEA Keynotes

Davide certainly wasn’t alone in his focus on semiconductor design for automotive applications, as it was also the topic of many other sessions at the conference [1]. The topic of verification and validation of these devices was often discussed. These are complex system-on-chip (SoC) designs, akin to those used in mobile devices, such as the NXP S32V234 [2], which NXP claims is the first SoC with the features required to automate and co-pilot a self-aware car. It features a quad-core ARM Cortex-A53 processor, dual NXP APEX Image Cognition Processors, a Vivante GC3000 3D Graphics Processing Unit (GPU) and an advanced memory bus system architecture.

NXP ADAS SoC

Now, it’s no secret that I am biased in favor of formal verification. I admit it. I work at Oski, the formal verification methodology company. I’ve spent the last 17 years of my career working to promote the use of formal verification in the semiconductor industry. So, I am certainly viewing the issue of verifying ADAS silicon through this lens.

With that out in the open, let’s consider the verification of a device involved in the control and safety systems for an autonomous vehicle. A safety or security flaw in any such device may lead to serious consequences. We are talking life or death consequences. Given that, wouldn’t you want the devices that control your autonomous vehicle of the future to be formally verified? The answer should be: Yes!

Fortunately, the good people who brought us the ISO 26262 standard [3] for automotive safety agree. Like most safety standards, the ISO 26262 standard defines a process by which safety is prioritized through a well-defined hardware development process. Let’s look at some of the steps in that process and how formal verification can play a vital role.

Safety Requirements Traceability

One of the first steps in the process is an analysis phase to determine all the safety related design requirements and their designated level of importance, otherwise called the Automotive Safety Integrity Level (ASIL). The ASIL can be one of four levels, A through D, with D being the highest. The levels are set based on factors such as the related risk of damage or injury, the likelihood of occurrence of a hazardous failure and the control that the user has over the system to minimize the effect of a failure. A requirements management system must be used to provide traceability of the safety requirements through the verification and validation phases to demonstrate that the implemented design meets the requirements. So, something in the test plan must be traceable back to the requirements, whether it be a test, a coverage point or an assertion. Here’s where we find the first application of formal in this process. Assertions that capture safety requirements and are subsequently formally proven will provide the ultimate confidence that the requirements have been met.
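The traceability idea above can be sketched in a few lines of Python. This is a minimal illustration, not a real requirements management system; the requirement IDs, ASIL levels and item names are all invented for the example.

```python
# Hypothetical sketch of a safety-requirements traceability check.
# Every requirement must trace to at least one verification item
# (a test, a coverage point, or an assertion).

requirements = {
    "SR-001": {"asil": "D", "text": "Brake command latency bounded"},
    "SR-002": {"asil": "B", "text": "ECC corrects single-bit RAM errors"},
}

# Each verification item records which safety requirement it traces back to.
verification_items = [
    {"name": "assert_brake_latency", "kind": "assertion", "traces": "SR-001"},
    {"name": "cov_ecc_single_bit",   "kind": "coverage",  "traces": "SR-002"},
]

def untraced_requirements(reqs, items):
    """Return requirement IDs that no test, coverage point, or assertion covers."""
    traced = {item["traces"] for item in items}
    return sorted(r for r in reqs if r not in traced)

print(untraced_requirements(requirements, verification_items))  # -> []
```

An empty result means every requirement is traceable; in the formal flow, the strongest items in that matrix are the assertions that have been proven.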

Random Failure Analysis

Another key step is the analysis of the effect of random failures on the system. The failures can be transient, such as Single-Event Upsets (SEUs), or permanent due to a component fault. The impact of the failures on safety must be mitigated using mechanisms such as Error Correcting Codes (ECCs) to detect and correct data bit errors and triple modular redundancy (TMR) circuitry. Subsequently, the safety mechanisms must be tested to demonstrate that they are effective in detecting and mitigating the effect of random errors. This negative testing approach involves fault simulation of a potentially enormous number of faults. This is where we find the second application of formal verification in the process. Formal verification tools can be used to pre-qualify the faults before fault simulation to prune out faults that are proven to be redundant or non-propagatable, which reduces the burden on fault simulation.
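The fault-pruning step can be illustrated on a toy netlist. The circuit below is invented, and a real formal tool would prove propagation symbolically rather than by enumeration; exhaustive input enumeration simply stands in for the proof at this tiny scale.

```python
from itertools import product

# Hypothetical sketch: pre-qualifying stuck-at faults before fault simulation.

def good(a, c):
    m = a & (1 - a)     # internal net m, constant 0 by construction
    return c | m

def faulty(a, c, stuck):
    m = stuck           # stuck-at fault injected on net m
    return c | m

def fault_propagates(stuck):
    """True if some input makes the faulty output differ from the good one."""
    return any(good(a, c) != faulty(a, c, stuck)
               for a, c in product((0, 1), repeat=2))

# Stuck-at-0 on m can never be observed, so it is pruned from the fault
# list; stuck-at-1 does propagate and must still go to fault simulation.
print(fault_propagates(0), fault_propagates(1))  # -> False True
```

Every fault proven non-propagatable this way is one fewer fault the simulator has to inject.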

Avoiding Systematic Failures

The ISO 26262 development process also includes requirements for avoiding “systematic failures”, which is another name for what we commonly refer to as design bugs. The standard recommends that formal verification be used to verify safety-related requirements with ASIL designations of B and above. The standard doesn’t dictate the process, but when it “recommends” something, there had better be a good reason to justify going against the recommended approach when it comes time for regulatory review and approval. So, this is where formal property checking can play a huge role in ensuring that these devices are safe and that no latent bugs will cause harm or damage. Oski has pioneered the methodology for end-to-end formal sign-off, which ensures that no bugs are left behind after the verification process. This process can be broken down into the four “Cs” of formal sign-off.

Oski Formal Sign-off Methodology

  • End-to-end Checkers

End-to-end checkers are quite different from typical assertion-based checks. They model the end-to-end behavior of the block under test and predict what the output should be based on the input, much like a scoreboard. However, they are coded in a distinct way that makes them suitable for formal verification: they are lightweight and don’t add undue complexity to the state space of the model, which is important for formal.
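As a rough sketch of the scoreboard analogy, consider checking a FIFO-like block. The reference model below is invented for illustration; in practice such a checker would be written in a hardware verification language and kept deliberately lightweight for the formal tool.

```python
from collections import deque

# Hypothetical sketch of an end-to-end checker: a lightweight reference
# model predicts each output from the input stream, scoreboard-style,
# and the check compares the prediction against the block's actual output.

class EndToEndChecker:
    def __init__(self):
        self.expected = deque()          # ideal FIFO contents

    def on_push(self, data):
        self.expected.append(data)       # record what should come out later

    def on_pop(self, dut_output):
        assert self.expected, "checker fired: pop from empty FIFO"
        predicted = self.expected.popleft()
        assert dut_output == predicted, (
            f"checker fired: expected {predicted}, got {dut_output}")

checker = EndToEndChecker()
checker.on_push(7)
checker.on_push(9)
checker.on_pop(7)    # matches the prediction, no failure
checker.on_pop(9)
```

The key property is that the checker predicts outputs from inputs alone, so a formal tool can search for any input sequence that makes the design disagree with it.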

  • Constraints

By default, formal will explore all possible input stimulus, so it’s important to filter out the illegal input space with constraints so that we don’t see false failures of the checkers. However, care must be taken to verify that there are no over-constraints, which could mask real failures and cause us to miss bugs in the design. We verify the constraints through formal methods such as assume-guarantee, and by running the constraints as assertions in the higher-level simulation environment.
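A small example of why constraints matter, using an invented 3-bit priority arbiter; again, exhaustive enumeration stands in for what a formal tool would do symbolically.

```python
# Hypothetical sketch: constraining away illegal stimulus.

def grant(req, width=3):
    """Fixed-priority arbiter: grant the lowest-numbered active request."""
    for i in range(width):
        if req & (1 << i):
            return 1 << i
    return 0

def onehot(x):
    return x != 0 and (x & (x - 1)) == 0

# Checker: the arbiter must always issue exactly one grant. Without a
# constraint, req == 0 produces grant == 0 -- a false failure, because in
# this environment at least one requester is guaranteed to be active.
constraint = lambda req: req != 0

failures = [req for req in range(8)
            if constraint(req) and not onehot(grant(req))]
print(failures)  # -> []
```

Note the flip side: if the constraint were too strong (say, `req == 1`), the proof would still pass while leaving most of the legal space unexplored, which is exactly the over-constraint hazard described above.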

  • Dealing with Complexity

Now, formal tends to run into complexity barriers on complex designs, especially when dealing with end-to-end checkers. This brings in the next piece of the methodology: abstraction models. Simply speaking, abstraction models reduce the state space or latency of the design so that formal verification can explore beyond its default threshold.
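A crude illustration of the state-space reduction, with an invented example: a 32-bit countdown timer has about four billion states, but if the property only cares whether the timer is zero, an abstract two-state model preserves the behavior being checked.

```python
# Hypothetical sketch: reachability on an abstract model of a 32-bit timer.
# The abstraction tracks only "zero" vs. "nonzero", collapsing 2**32
# concrete states into 2 abstract ones.

def abstract_next(state):
    # nonzero may stay nonzero or decrement to zero; zero reloads to nonzero
    return {"nonzero": {"nonzero", "zero"}, "zero": {"nonzero"}}[state]

# Exhaustive reachability finishes instantly on the abstract state space.
reached, frontier = set(), {"nonzero"}
while frontier:
    s = frontier.pop()
    if s not in reached:
        reached.add(s)
        frontier |= abstract_next(s)

print(sorted(reached))  # -> ['nonzero', 'zero']
```

Real abstraction models are of course built with care to stay sound with respect to the checkers, but the payoff is the same: far fewer states for the formal engines to explore.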

  • Formal Coverage

Formal coverage is a key component of the sign-off methodology. Much as in simulation, formal coverage metrics help us measure controllability – how much of the design’s state space has been covered by the input stimulus we’ve explored. But they also have a unique ability to report observability coverage – that is, how much of the design’s behavior is being checked by our end-to-end checkers. This makes formal coverage a very strong metric for measuring progress and closing verification holes.
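The two coverage views can be sketched on a toy state space; the state names and percentages below are invented purely to show the distinction.

```python
# Hypothetical sketch: controllability vs. observability coverage.
# Controllability: which states the explored stimulus can reach.
# Observability: which of those states an end-to-end checker actually checks.

all_states = {"IDLE", "BUSY", "DONE", "ERROR"}
reachable  = {"IDLE", "BUSY", "DONE"}       # proven reachable by formal
checked    = {"IDLE", "DONE"}               # states observed by the checkers

controllability = len(reachable) / len(all_states)
observability   = len(checked & reachable) / len(all_states)

print(f"controllability: {controllability:.0%}")  # -> controllability: 75%
print(f"observability:   {observability:.0%}")    # -> observability:   50%
```

The gap between the two numbers is the interesting part: states we can reach but never check are exactly the verification holes the methodology aims to close.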

This process has been successfully deployed on many dozens of tapeouts over many years. Over and over, it has shown the ability to find all bugs in a design very efficiently, including obscure bugs that would likely have slipped through the verification process otherwise. It seems like such a natural fit for the verification of control and safety systems in autonomous vehicles. I certainly would want to use it if lives were on the line, and I hope you do too. I’d use formal like my life depended on it, because with ADAS, it just might!

[1] CDNLive EMEA,
https://www.cadence.com/content/cadence-www/global/en_US/home/cdnlive/emea-2017.html

[2] NXP S32V230 Family of Processors for Advanced Driver Assistance Systems, http://www.nxp.com/products/automotive-products/microcontrollers-and-processors/arm-mcus-and-mpus/s32-processors-microcontrollers/s32v230-family-of-processors-for-advanced-driver-assistance-systems:S32V230

[3] ISO 26262 Standard, “Road Vehicles — Functional Safety”, www.iso.org
