April 19, 2004
ESL Chapter 1
Jack Horgan - Contributing Editor


Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us!

It would seem axiomatic that design teams would benefit from a methodology that enabled meaningful design tradeoffs early in the design cycle and that supported significant levels of parallelism, particularly between hardware and software development activities. The two groups have traditionally operated with a high degree of independence, with the software team largely gated by the availability of hardware prototypes. Given the high cost in dollars and time of changing the hardware design, the challenge of correcting defects or making enhancements often falls on the shoulders of the software team late in the cycle. This can result in poor performance, missing functionality or slipped delivery schedules.


ESL, or “electronic system level” design, is a simulation-driven, top-down design approach that professes to address these issues. The key to the ESL approach is modeling at higher levels of abstraction. It is quicker and easier to generate models at higher levels of abstraction than at the RTL level, much as programming in a high-level language like C/C++ is quicker than programming in assembly. Simulations also run orders of magnitude faster by avoiding unnecessary operational details. In combination these speedups greatly increase the productivity of the overall (model, simulate, evaluate, refine) cycle, enabling far more alternatives to be examined in a given time period.
One could look at the methodology as progressive refinement to lower levels of abstraction: from untimed functional model to timed functional model to transaction-level model to behavioral hardware model to pin-accurate and cycle-accurate model to RTL model.


ESL design covers the areas of


System Level Design
Architecture exploration
Executable Functional Spec
Virtual prototyping
HW/SW Co-design
Co-simulation/co-verification
HW/SW Partitioning
 


Generic Diagram of ESL Design Flow


The first stage in an ESL design flow is Requirements. Requirements may come from a specific customer or from the marketplace through Marketing. Inputs for a Marketing Requirements Statement come from the usual suspects: specific customers/prospects, user groups, field sales, competitive analysis, industry trends, emerging technologies and so forth. In some cases algorithmic design and verification are performed in a programming language like C or using tools like MATLAB/Simulink from The MathWorks or SPW from CoWare. Using a combination of textual and graphical tools, an executable functional specification is produced describing the behavior of the design along with constraints. The functional specification is technology independent and likely these days to be written in SystemC. The description is devoid of implementation details. Decisions as to which pieces will execute in software running on a processor and which will be targeted for hardware are deferred. The functional spec can be used to verify that the design complies with the functional requirements. It can serve as a master testbench or golden reference for the later stages of design.


The next step is the hardware/software partitioning process, which determines the number and types of processors and assigns functions to those processors consistent with requirements of speed, cost, silicon area and power consumption. An optimum solution would be some weighted function of these metrics; in some cases performance would be most important, in others cost. To a mathematician this looks like a classic integer programming problem, where one defines an objective function and seeks an optimum solution consistent with constraints by some hill-climbing algorithm. In practice there are no accepted automated partitioning schemes. Partitioning can start by assuming that everything will be in software, running the software on the target processor or on a reference host processor, and determining the performance. Portions needing speedup can be transferred to a hardware implementation until performance constraints are met. One could use the opposite approach of modeling everything in hardware and migrating elements to software until constraints are violated. The process is interactively guided by heuristics and experience. ESL vendors point to very fast simulation as the best vehicle for exploring the possible architectural space. The user can create multiple configurations with different mapping and allocation strategies, and select the best architecture. The user can study the influence of design parameters (e.g. bus bandwidth, processor speed, size of data) at simulation time by interactively tuning model variables.


At any point in the ESL design cycle the questions include what is being simulated, the level of abstraction, support for mixed levels of abstraction, the accuracy of the simulation, the speed of the simulation, the required inputs, the language used, and measurable metrics. Also important is how well the results of one stage feed the next stage. No one wants duplication of effort. In particular, to what extent does the behavioral model guide, drive or automate the HDL or RTL modeling of the hardware? The answers to these questions vary among the different vendors.


The way the diagram is drawn one might conclude that ESL assumes starting with a blank page. In practice the requirements document may specify certain aspects of the design, the design goals may be similar to an existing design, there may be a company preference for a certain technology or supplier and certainly a desire for IP reuse. The ESL methodology must lend itself to exploiting legacy information by supporting import and generation of models at lower levels of abstraction.


Gartner Dataquest estimated the size of the market for fiscal 2000 at around $70 million and projects it to exceed $300 million by 2006. The ESL arena is populated by a number of startups such as Summit Design, CoFluent, VaST Systems Technology and CoWare that are profiled below.




Summit Design


Summit Design is the oldest of the four, having begun in 1991 as SEE Technologies, the Israeli development arm of Daisy Systems. Summit Design had a successful IPO in 1996 and in 2001 merged with Viewlogic Systems to form a new public company called Innoveda. In April 2002, management and investors bought Summit Design from Innoveda and relaunched it as an independent company.


During a phone conversation Guy Moshe informed me that Summit Design now has 76 employees, is profitable and had revenues of $14 million in 2003. He claims 900 customers with an installed base of 26,000 seats. The key modules in Summit Design's offering are Visual Elite, FastC, System Architect and Virtual CPU.


Visual Elite is the next-generation version of Visual HDL, an HDL/graphical design entry tool that allowed designers to use any existing VHDL or Verilog code by turning it into easy-to-understand graphics, including block diagrams, state diagrams, and flow charts. Visual HDL translated the verified design into a simulatable and synthesizable HDL format. With Visual Elite users can graphically create design units in C/C++ or SystemC while being able to link to other units created in HDL.


Summit provides FastC, an ultra-fast verification platform based upon SystemC/C++ modeling. The FastC coding style allows designers to work at the RT level in much the same way as they do with Verilog or VHDL, while taking advantage of the performance of natively compiled code. With FastC's static scheduling technology, the scheduling and dependencies are determined during compilation. The simulation engine executes the design structure and behavior while enabling links to SW-oriented objects such as an RTOS. Working with Visual Elite's interactive debugging tools, users can set breakpoints, single-step, and trace signals and data structures in a graphical waveform regardless of source language. Summit provides a FastC-to-RTL-HDL mapping that allows designers to automatically take any FastC unit and generate an equivalent, synthesis-ready HDL representation.


With System Architect users can build, simulate and analyze architectural prototypes of HW and SW systems. System Architect allows tracing data transactions through tokens that can carry performance and data information throughout the system, with various distribution models such as uniform and Poisson. A library of parametric API functions facilitates the modeling of microprocessors, bus elements, memories and I/O devices. Users can analyze system performance characteristics such as data path latencies, component utilization and data throughput. Users can accurately determine utilization of data processing, memory and communications interconnects and visualize performance issues such as response times and bottlenecks.


Once the allocation between SW and HW blocks has been determined, Summit's Virtual-CPU may be used to efficiently link the logic simulator and the SW program execution. Virtual-CPU offers a complete environment for co-verification of hardware and software. It includes a software execution environment that runs the embedded system software as if it were running on the target CPU. This is coupled with a logic simulation of the embedded system hardware, which responds to bus cycles as if they were initiated by the target CPU. The target processor is replaced by the interaction of a bus-functional model of the processor and a virtual processor running the system firmware. The software side can run in host-code execution mode in a workstation process or in target-code execution mode within an Instruction Set Simulator. Virtual-CPU supports a wide range of solution support packages, including processor, bus, RTOS and development board solutions.




VaST Systems Technologies


I spoke with Graham Hellestrand, founder and CEO of VaST Systems Technology. Graham was an Australian engineering professor at the University of New South Wales, when he founded VaST in 1997. The company is now located in Sunnyvale. There have been three rounds of funding, the last was a $6 million round in May 2003. The product line has been in the market for about four years. The firm has 33 people, 45 if you count a captive distributor in Japan.


VaST sees the system design process being driven by a Virtual Prototype System, which consists of one or more Virtual Processor Models (VPM), a model of the communication internals and models of peripheral devices. The skeleton of a virtual prototype is defined by the hierarchy of interconnection (bus) structures and the articulation points (bus bridges) between buses. Buses provide the communication and infrastructure support for the work and storage engines of the control system. The work engines are both processors and peripheral devices. VaST employs a proprietary Communication and Infrastructure Fabric (CIF) to enable the systematic modeling of interconnections, from simple single-wire connections to complex parallel buses and intercommunication networks. The company has recorded performances exceeding 1.2 million transactions per second on a cycle-arbitrating bus built using CIF, and claims that this is 10-100 times faster than the nearest competing bus technology. A VPM is a complete behavioral model that directly executes code. VaST has developed a library of ~25 models of the most common processors. Additional processor models are available as a service.


CoMET is VaST's design environment for the concurrent design of hardware, software, mechanical and DSP systems. CoMET enables the specification, architectural exploration and modeling, design, development and verification of fully executable Virtual System Prototypes. A virtual system prototype, or network of system prototypes, built using CoMET and incorporating one or more VaST high-performance virtual processor models and bus models is both fast and cycle-accurate. CoMET fully supports the design and development of complex peripheral devices. Bus models (such as PCI, Ethernet and CAN) may be designed at the signal or transaction level; bus bridges and peripheral devices may be designed at any level of functional and timing accuracy.


The Nova simulation engine is a next-event driven simulator with an ultra-efficient kernel, optimized for simulating complex multi-core virtual prototypes connected by arbitrating multiplexed buses and complex bus bridges to many peripheral devices ranging from memory to PCI and CAN buses. From the simulation, users can extract detailed performance data such as bus utilization, cache hit and fill rates, code procedures consuming the most time, and platform power dissipation.


METeor is VaST's interactive real-time and embedded software development environment, which executes Virtual System Prototypes created in the CoMET environment. Candidate virtual system prototypes can easily be distributed for use by all members of the development team. Software is developed on a virtual prototype of the target system rather than on a host system. The VPM will execute the identical binary code used on the real target hardware. VaST's virtual processor modeling technology supports the development of an operating system and its device drivers within a target virtual prototype, as well as porting existing OSs to target virtual prototypes.


The designed Virtual Prototype may be used as a Golden Reference Design for quantitative architectural evaluation, and to drive both the development of a synthesizable register transfer description of the virtual prototype, and the pre-silicon development of software.




CoWare


In speaking with Armstrong Kendall, Director of Product Marketing, I learned that CoWare started in 1996, having spun out of IMEC (Interuniversity Microelectronics Centre) in Leuven, Belgium. IMEC is a world-leading independent research center in nanoelectronics and nanotechnology with research focused on the next generation of chips and systems. Under a long-term agreement, CoWare has exclusive rights to patented IMEC technology. Their first product was introduced in 1998. In 1999 the company launched the Open SystemC Initiative (OSCI) along with Synopsys. CoWare Chairman Guido Arnout is currently the president of this organization. In September 2003 Cadence and CoWare formed an alliance that included joint development, cross-licensing, a coordinated go-to-market and standards strategy, and a Cadence equity investment in CoWare. Under this agreement CoWare will focus on ESL as a front end to Cadence's Incisive verification platform. Also as part of a special licensing agreement, Cadence transferred its Signal Processing Worksystem (SPW) group to CoWare. The firm now has ~180 people. CoWare is a private company that has raised over $30 million through VC and corporate investors including ARM, Cadence, STMicroelectronics and Sony.


CoWare's System Verifier simulation kernel provides "Always-On" SystemC performance improvements over the OSCI reference simulator, automatically speeding up the simulation of events, signals, time, and wait calls. System Verifier supports mixed levels of abstraction for all phases of system design: algorithmic, transactional, cycle accurate, and pin accurate. In addition, System Verifier includes an optional SystemC source code optimizer. System Verifier is tightly integrated with Transactional Bus Simulators and transaction-level Processor Support Packages (PSPs). The PSPs integrate vendor-supplied ISSs with the System Verifier kernel, and provide vendor-supplied software support tools such as compilers, debuggers, and linkers.


The System Designer enables transaction-level SystemC designs to serve as functional verification prototypes, or "test-beds", for embedded software, "divide and conquer" refinement verification, and full software-hardware integration. System Designer's analysis tools provide textual and graphical views to analyze SoC architectures, including items such as cycle-accurate performance, throughput and bottlenecks, bus switching and cache usage, system response, and processor loading. System Designer's HDL simulator interface supports SystemC co-simulation with VHDL and Verilog simulators.


The Platform Creator brings a graphical environment to assemble, configure and optimize SoC platforms at the transaction level in SystemC. It provides drag-and-drop partitioning between the functional specification and the platform, including creation of low-level software drivers and transaction-level interconnect by Interface Synthesis. The Advanced System Designer automates the import and integration of VHDL and Verilog blocks with a SystemC transaction-level system. At any stage of refinement, a mixed SystemC and HDL netlist can be created, complete with everything necessary to co-simulate with ConvergenSC's range of supported HDL simulators.


The ConvergenSC Model Library includes a range of processor models from leading vendors, transaction-level bus models and RTL bus generators for common bus specifications, and peripheral models.




CoFluent


In conversation with company co-founder and general manager Stephane Leclercq I learned that the French company was started last year and has about 10 employees; however, the technology is third generation, coming out of a French research lab. The CoFluent approach is based upon a top-down, application-driven process known as Co-design Methodology for Electronic Systems (CoMES, a.k.a. MCSE, the equivalent French acronym) developed by Jean-Paul Calvez, a professor at the University of Nantes and now the company's chief technology officer. The modeling begins with two distinct viewpoints, the functional and the executive.


The functional model defines the logical architecture in terms of hierarchical structures. All functions in a model execute independently, in full parallelism and asynchronously from each other. The communication links can be via shared variables, synchronizations, and message queues. The functional model is technology independent.


The executive structure defines the physical architecture based upon active components (microprocessors, DSPs, memories, I/O devices) and their interconnections. Inter-related processors and shared memories communicate via communication nodes and signals. Communication nodes represent processor-to-processor and processor-to-memory communication links: point-to-point, bus or router. Signals are inter-processor synchronization events, e.g. interrupts.


A specific architectural configuration is derived by mapping the functional model onto the executive model. This entails a hardware/software partitioning and an allocation. This joining of the logical and physical models into a system architecture is referred to as the “Y” design approach.


The functional model is fully timed. Timing attributes are defined for execution times of operations, read/write times for shared variables, send/receive times for messages, and signal and wait times for events. This enables the system's behavior to be simulated over time using a high-speed event-driven engine that directly executes the algorithms of operations on the development host platform at optimal speed. By default, the functional model is executed in an ideal environment (fully parallel functions, immediate communications, and unlimited resources). Time constraints can be added to elements of the behavioral model in order to assign more realistic execution or communication times. In the architectural model, additional time constraints are added based on the chosen hardware and software elements of the architecture.


With CoFluent Studio, designers create models at the message level, above the address-mapped transaction. Communications are atomic and mutually exclusive. Simulation at this level of abstraction is 10x the speed of transaction-based simulation and 1,000x the speed of RTL simulation. CoFluent claims that their simulations deliver accurate performance data with an estimated error margin around 5%.


The company's CoFluent Studio is a System Design Environment (SDE) that provides designers with the means to graphically define and architect a high-level model of their design. The model is automatically translated into SystemC 2.0 and co-simulated with CoFluent Studio for behavioral verification and prospective performance analysis. Designers can view the execution timeline, the evolution of system variables, and indices such as CPU utilization, response time, and throughput. CoFluent Studio's models and generation technique are the basis for an efficient co-simulation engine using a C++ or SystemC translation of the whole application. The same technique is the basis for the hardware/software co-generators: a VHDL generator for the hardware and a C generator for the software.




Afterword


I asked each of the vendors why ESL has not gained greater acceptance before now and why they expected this to change in the future. A number of reasons were cited.


The first generation of ESL offerings (from other vendors) was flawed and did not live up to the hype. Guy Moshe commented that the first version of SystemC was seen as a threat to Verilog and VHDL and consequently received a negative reaction. There were limited tools supporting the language. Although most current vendors are relatively small, their technologies are third generation.


Whenever a product involves a change in methodology, particularly if it crosses organizational lines, there is a missionary aspect to the sale. ESL fits nicely into the way system architects and software engineers work, but not so with hardware engineers. The former group is concerned with functional accuracy and an environment where they can rapidly code, compile, execute and debug software. Hardware engineers are more concerned with cycle accuracy and are comfortable with different languages, tools and levels of abstraction. They see the ESL approach as requiring two different models, behavioral and RTL. If a vendor claims to be able to automatically synthesize RTL from the behavioral level of abstraction, there are concerns about accuracy and optimization. This feeling has always occurred whenever a new level of abstraction is introduced. Over time, designs created with the new tools begin to approach the quality metrics of existing techniques. Whatever differences persist are more than compensated for by an order-of-magnitude increase in productivity. It is up to each firm to judge where on this timeline ESL currently sits.


Perhaps the major obstacle to widespread adoption has been one of market timing. Heretofore, companies have been able to deliver designs without ESL tools. As salespersons frequently say, “No pain, no sale”: the elimination of a pain or problem is the strongest incentive for a prospect to take action. As the complexity of designs increases, as the windows of opportunity shrink, as the amount of software content rises and so forth, the need increases for an approach that compresses the overall development process and supports concurrent development of hardware and software. Companies recognize there is a huge opportunity loss if they miss the narrow market window. They understand the need for this type of solution. ESL vendors are now seeing a lot of interest.


Graham Hellestrand sees product development in transition from being silicon centric to software centric. He cites the change in hardware/software development costs from 80%/20% in 2G cell phones to nearly the reverse in 3G cell phones. He believes that we're at an inflection point: architecture-driven design is a true revolution in hardware design. Stephane Leclercq observes that designs for telecommunication and multimedia systems can now involve 10 to 20 processors. This level of design complexity is simply beyond the comprehension of the human mind.


I titled this editorial ESL - Chapter 1 because there are several areas and vendors that I wanted to cover but ran out of room. Consequently, I will have a second chapter in an upcoming issue.




Letter to the Editor - from Dave Dopson


In your description of 64 bit computing, you said that a 32bit processor could address 4.3 billion bits or 4.3GB. Each address corresponds to a byte rather than a bit, and so I believe what you meant to say was 4.3 billion BYTES or 4 Gigabytes (since 1KB = 1024B).


Also, a lack of addressing space will not lead to disk thrashing, but rather more annoying design issues as the OS will have to divide the memory into different segments which must be selected before access. Disk swapping would actually occur in the reverse situation of a virtual address space that was larger than physical RAM.


Reply: Dave, thanks for picking up on this. A 32 bit number does indeed map into 4.3 gigabytes. The original “4.3 billion bits or 4.3 Gigabytes of memory” had either one too few or one too many “bits”. As for thrashing, this occurs when the "Working Set", i.e. the set of active pages, is larger than physical memory. When a process needs to bring something into memory, it is forced to swap out something that it will likely need again. The system ends up spending more time moving pages in and out of memory than doing anything else and is said to thrash. I don't think my article implied that the size of the address space was the cause of thrashing.




Weekly Industry News Highlights


Mentor Graphics Updates VStationTBX Verification Accelerator, Delivering Full Language Support for SystemC


EDA Partnership Delivers Yield Optimized SIP; Prolific, Circuit Semantics, & Legend Design Technology deliver standard cells for high volume applications


EDA Consortium & VSI Alliance Expand Semiconductor Intellectual Property Revenue Reporting


TSMC Qualifies Cadence Encounter RTL Compiler for Next-Generation Reference Flow; RTL Compiler Synthesis Delivers Nanometer Results


Synopsys and TSMC to Optimize RTL-to-Wafer Design Process


Synopsys Galaxy Test Solution Sets New Benchmark in Performance and Quality for Deep Submicron Designs


Synplicity Speeds FPGA Verification with Breakthrough Incremental Debug for Xilinx Devices


Synplicity Extends Capabilities of Certify Prototyping Software, Delivering the Ability to Prototype the Largest ASIC Designs


TSMC Validates Magma's Capacitance Extraction Accuracy for 0.13-Micron Designs


Applied Micro Circuits Corporation Announces Definitive Agreement to Acquire Intellectual Property and a Portfolio of PowerPC 400 Products From IBM, Signs Power Architecture License


TSMC Says IC Industry Success Starts with Design Collaboration; Shrinking Market Windows Require New Design Model







-- Jack Horgan, EDACafe.com Contributing Editor.