May 12, 2003
Show me the money
Peggy Aycinena - Contributing Editor

Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us!

With the help of the folks at VitalCom PR, I had a chance to listen in on a roundtable discussion with three executives from EDA - Steve Wang, Vice President of Marketing and Co-founder of Axis Systems, Inc., Vess Johnson, President and CEO at Silicon Metrics Corp., and Jeff Jussel, Vice President of Marketing at Celoxica. The topic on the table was money - in particular, where are the profits in EDA today and where will they be tomorrow? The answers these fellows came up with were in some ways predictable and in some ways quite surprising.

Wang - “Our customers right now are in emerging markets - we're selling to the embedded market, embedded processors as well as embedded operating systems. There are lots of emerging applications on the automobile front. On the consumer front, it's wireless and multi-media. Dataquest is projecting software revenue growth in 2003 over 2002 of as much as $700 million, which will closely match the 2001 level. Demand has returned in the embedded market and the growth there is very healthy. Sony, for instance, is purchasing quite substantially, investing more and more in EDA tools. And certainly Sony is very healthy from a company standpoint.”

“One area that continues to be difficult is the communications sector. Investment is not quite as heavy there because there's still a lot of overcapacity. Some of those customers who used to have multiple projects are now trying to focus on a single project. From that standpoint, they've really scaled back. They used to tape out a chip and move on, but now they're actually staying around to see the thing through to production.”

Johnson - “I agree with that assessment. We sell into foundries, COTs, IDMs. The most activity is in the larger IDMs. We see them in a period of retooling, putting in place new design flows for 130-nanometer and 90-nanometer effects. The major push is for IDMs to look at new technology and try to make decisions on whether to build or buy. [We see them] leaning towards buy rather than deviate from their core competency to build internal tools.”

“But the rest of the industry is struggling at 130 to 90 nanometers. Maybe they're falling back to 130 nanometers, but the problems still persist. Still, the push for retooling is to get ready for the new wave. Certainly the larger companies like IBM are pushing ahead into the 90-nanometer arena, but other companies may be waiting to see. The transition to 90 nanometers will worsen problems.”

Jussel - “It's a back-to-basics economy. Not as good as it was in the 1990s, but not as terrible either. Companies now must realize that budget freezes and cuts only work so far. They have to return to focusing on product features. Some companies are moving to the cutting-edge at 90-nanometer technology. Others are taking advantage of 180 and 130 nanometers where there are lots of cost/performance advantages, where they can wring additional profits out of established technology.”

“In any market there are going to be some industries that are doing better than others. We've definitely seen examples of that. One area that is doing very well is defense. A lot of our defense companies are investing at the system level to take advantage of 180 and 130 nanometer capabilities. We take hardware/software co-design and target it into FPGAs and [we think] the defense guys may decide to go there rather than pursue an ASIC. The FPGA guys should be feeling confident about their future. It will become more cost effective to try to make it on an FPGA.”

Question - “Somebody in EDA is making money on something out there. Who is it? What are they making and how are they doing it?”

Wang - “The companies making money are those providing value to customers, which [in turn] generates repeat orders. You can sell one or two tools, but you can't keep re-selling without convincing a company. If you look at Wally Rhines' presentation [at the EDAC panel in February], EDA is interesting. A lot of the small companies are Number One in their own sector and there are lots of small sectors. So you can carve up the industry and make money in lots of ways.”

“The ESL [electronic system level] space is an example. A lot of design tools have been focused on the design side, but not on the verification side. There are a whole lot of C-level people who were focusing on getting from design to the RTL level. But for ESL types of design, there's been too much focus on design. Over time, people have figured out that the problem is verification. That's going to be the next wave for ESL.”

“From an industry standpoint, the climate has changed to more of a top-down approach. Sales [strategies] have changed, so we want to approach it from a very high level. Companies want to make sure that purchase orders go up to Board-of-Directors level. And we've had a couple of situations where the PO has stopped at the CEO level. Previously, companies set their budgets on a yearly basis, but now it's being changed to a quarterly basis.”

Johnson - “I think that if you bring value to the customer - issues that are on the top ten list of problems - you'll succeed. Products that are on the fringe don't do as well. In a down economy, you have to get back to basics. We know that the closer you are to the customer the more you understand the problems. I've been in EDA for a long time. EDA tends to ride the ups and downs of the market fairly well. When the market is down, a lot of our customers tend to go into retooling and prepare to emerge a lot stronger. During these times, they make the build-versus-buy decision by looking at what they can get off the shelf.”

“I don't think EDA has ever been truly down. Look at the Q1 2003 earnings from the public EDA companies. They're not great, but EDA doesn't have the sharp downturn [that characterizes the rest of the semiconductor industry]. It's a more gentle wave. That's just the nature of our market.”

“Are our sales cycles longer? I wouldn't be surprised if you're seeing a 25% to 50% increase in the cycle due to instability in the market. Typically you try to position your sales force to get as many high-level contacts as possible and to remember that the strategy should always be very focused on execution.”

Jussel - “It always helps if you're one of the big guys in EDA because the mainstream products always remain key. Our strategy is to fit into that flow without disrupting, while bringing in a new technology that the customers can't live without. In ESL, it's not a problem of design versus verification. [Vendors] haven't failed because they were doing design tools. They failed because they included one-off languages without a user base, where there was no ability to tie into the design flow or the verification system. The key to ESL is to tie into the existing flow and provide the ability for hardware/software verification.”

“The Dataquest market numbers indicate you have to deal with all levels [in a customer's organization]. In most companies, that may not mean the CEO or the Board, but you [always] have to go to the CAD tool [users] and develop demand in the engineering team by proving improvements in their productivity. At the same time, you have to work with the CAD department and prove how you'll fit into their design flows. It's a lot more complex right now because people don't have design budget to try things out. So you have to deal with all the players at once. But, once you're in there, it can be very satisfying.”

Question - “What about using the web and on-line sales strategies?”

Jussel - “That's not something that we're going to address at this point. There are markets where you can get access over the web periodically, but most customers want to know they will own the code. Companies that want to include you in their design flows want to know that they can design you in. [On-line sales] fit better where people have money for one-off use, but for us [the strategy] is less appealing. And, you need to distinguish between on-line licensing versus web use of a tool.”

Johnson - “That is one area that we've looked at for things like peak-use licensing. We have customers whose only demand is access to greater capacity, who will support a pay-per-use basis. Also, sometimes you can win a technical benchmark, but suddenly the CEO won't sign off on a purchase. So we might aim to try to help out an engineering team on a rental basis and give them the same support as we would with a perpetual license. I [certainly] don't think it damages our image to have products available through e*ECAD [for instance]. But that's not a cheap [sales] solution and not all of our tools lend themselves to that use model.”

“Actually, it's more cost effective for a customer if they buy perpetual licenses rather than try to work through rental value. Although we have design groups in large [customer] companies who have frozen capital expenditure budgets and we're trying to meet their needs, the mechanism we really support is licensing.”

Wang - “In this climate, we're very focused on major accounts. It's not like two years ago when we had lots of different types of customers. [Product access] over the web provides an evaluation vehicle to customers. We tried it a couple of years ago to let people play around with it, but the major problem was security of the design with on-line applications. People had to port designs off to different sites and then run software out there. From that standpoint, people felt insecure. In the current business climate and with our major accounts, we need the flexibility of all these [models] - subscription, perpetual licensing, leasing. It makes access over the web a more distant option.”

Question - “What about Linux?”

Wang - “We see Linux applications. Remember the idea that you could use compute farms for big designs and incorporate other compute resources. Well, Linux will allow people to open up their computing resources, which is the reason we're porting all our stuff to Linux. Linux is open source and, by the same token, can provide some high performance. But [overall] the operating system is not a big thing, especially where we see mixed environments which include Linux.”

Johnson - “We support it. It's here. The cost-of-ownership difference between a Linux box and other proprietary workstations is quite significant. It's going to be a change in the industry. We support HP and Sun. For us, the software ported across those platforms quite easily and the support impact for us is quite minimal. For customers, that decision is mostly made at the corporate level. But I do see the move to Linux based on the cost of ownership. But huge capital investments have been made into other design platforms.”

Jussel - “For us, we don't support Linux. We're providing system-level design tools that target FPGAs and we work on PC platforms. As far as we're concerned, it's definitely a corporate decision. We're committed to the Windows platform and the onus is on Linux [to prove itself]. In this economy, anything new needs corporate approval for our customers - the budgets are tied up at the corporate level. In the past, you would have seen engineers with their personal preferences [being met]. Quite a few moved to Windows and we're happy to meet their needs.”

Question - “Gentlemen, would you like to make a closing statement?”

Johnson - “We are investing in R&D internally. Morale is fairly high here and we're very focused. We know what we need to do. We need to go out and execute, generate revenue, and have a healthy future. We want to build for that future. We're trying to take a balanced approach to all of this. We're trying not to be so focused on today, that we're not looking to the future. We're a fairly small company compared to a Synopsys, so for us it's tougher to communicate to the masses. Meanwhile, we go out of our way to communicate to our employees. In fact, we almost over-communicate. Everyone in the company knows exactly where we are with our marketing strategies and our customer awareness initiatives. And of course, we're [constantly] communicating with our customers.”

Wang - “Our company is a technology driven company. We've set a goal of having one major technology introduction each year. And we've done that each year. We have the investment for going forward. And in today's market, it's much easier to get talent. People are changing the way they do things. They want to see the impact of a new tool, so from our standpoint, [the process] is very customer driven. We want to unleash new technology into the customer's hands. Some of our engineers deal directly with customers and we see that as motivation for our engineers because they get a good feeling when they see our products in use.”

Jussel - “Our technology is out of Oxford University and we've got over 700 university partners who we can engage with as R&D partners. 25% of our engineering workforce is in R&D with responsibility for incremental leaps in technology. And then we're always looking for external technology. We think that even in this somewhat difficult market, there are many exciting opportunities ahead.”

(Editor's Note: Thanks to Vess Johnson, Jeff Jussel, and Steve Wang for a fascinating hour of conversation.)

Industry News - Tools & IP

Accelerated Technology, the Embedded Systems Division of Mentor Graphics Corp., announced that its Nucleus RTOS was selected by the National Radio Astronomy Observatory (NRAO) for employment in Phase I of the expansion project for the Very Large Array (VLA). The VLA, described as “the most scientifically productive and widely-used telescope in the world,” is an extremely sensitive radio telescope that collects radio waves naturally emitted by celestial bodies.

These radio signals are recorded and then processed by astronomers to produce a radio image of those bodies as seen by the telescope. (Hopefully Celestial Bodies are getting royalties for those downloads.) Accelerated Technology says its Nucleus PLUS kernel and Nucleus NET TCP/IP protocol stack will be implemented in the first phase of the VLA expansion project. The Accelerated Technology news release specifies, “The project will use modern electronics and computer technology to increase the capability of the VLA tenfold in all scientific aspects.” (As opposed to ancient electronics, perhaps?)

Aldec, Inc. announced the Riviera-IPT unified, assertion-based hardware acceleration platform that the company says “maximizes simulation performance and accelerates ASIC and FPGA design verification by 10x - 50x over traditional methods.” The platform is based on Aldec's VHDL and Verilog common kernel mixed-language simulator and hardware acceleration technology, and is targeted at multi-million gate SoCs built on 0.18-micron technology and below. The company also says that “Riviera-IPT enhances debugging capabilities and accelerates verification by supporting assertions in both a software and hardware environment.”

Aldec says the high-level language of assertions contains declarative constructs for capturing and verifying design specifications throughout the design cycle. Using assertions in conjunction with a traditional testbench produces faster verification results and improves overall coverage. Assertions are portable and can be added to the design outside the Unit Under Test, or can be embedded directly in the design during coding with Riviera-IPT, according to Aldec. Riviera-IPT then compiles assertions into hardware, along with selected design sections. The assertion compiler in the platform can produce module checkers in the form of RTL code added to the synthesizable portion of the design.
Assertion checks are used at both the behavioral (dynamic) level in the software simulator, and at the structural (static) level in the hardware accelerator.

The hardware-based assertion monitors consist of two principal parts: a logical sequence of signals to be observed and the desired response when the assertion violation is detected. Once assertions are implemented and verified, they remain part of the final design and can be used as real-time protocol checkers to detect violations during normal device operation. Riviera-IPT should enable designers to verify and optimize their designs in smaller, more manageable blocks. Each block is verified in software by Riviera-IPT's event-driven simulator, so the module is debugged before it's synthesized and placed in the hardware accelerator. After the verified module is placed in the hardware board,
it remains “connected” to the remainder of the design residing in the software simulator. Ultimately, the majority of the design blocks, including assertions, reside in hardware, while the behavioral testbench and SystemC components remain in software. The interface between the components in hardware and software is managed through Riviera-IPT's Design Verification Manager (DVM).
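The two-part monitor structure Aldec describes - a logical sequence of signals to observe, plus a desired response when a violation is detected - can be sketched in a few lines of Python. The req/ack handshake rule below is purely illustrative and is not Aldec's implementation:

```python
# Illustrative sketch of an assertion monitor: watch a signal sequence,
# fire a response on violation. Rule (hypothetical): after req goes high,
# ack must go high within max_latency clock cycles.

class AssertionMonitor:
    def __init__(self, max_latency=3):
        self.max_latency = max_latency
        self.countdown = None          # cycles left for ack, or None if idle
        self.violations = []           # cycles at which the check failed

    def clock(self, cycle, req, ack):
        """Evaluate the assertion on one clock edge."""
        if self.countdown is not None:
            if ack:
                self.countdown = None  # sequence completed; assertion holds
            else:
                self.countdown -= 1
                if self.countdown == 0:
                    # desired response on violation: record (or raise) it
                    self.violations.append(cycle)
                    self.countdown = None
        if req and self.countdown is None and not ack:
            self.countdown = self.max_latency  # arm the sequence

# Drive the monitor with a simulated trace: req at cycle 0, ack never comes.
mon = AssertionMonitor(max_latency=3)
trace = [(1, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)]
for cycle, (req, ack) in enumerate(trace):
    mon.clock(cycle, req, ack)
print(mon.violations)  # → [3]
```

A monitor compiled into hardware does the same bookkeeping with registers and comparators clocked alongside the design, which is what lets the checks keep running in the accelerator and even in the final silicon.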

The product includes an IEEE VHDL, Verilog and EDIF common kernel simulator, the DVM, a hardware accelerator board that can handle up to 12 million FPGA gates, and an interface to a SystemC compiler. Synplicity's Synplify(TM) logic synthesis can be added to complete the design flow.

Barcelona Design Inc. announced that the company has entered into a multiyear, multimillion-dollar agreement to supply its synthesizable PLL products to Matsushita Electric Industrial Co., Ltd. Under the terms of the agreement, Barcelona will port its Miró Class Clocking Engine to three Matsushita CMOS processes, including the 130-nanometer node, and provide the ported engines and the Prado Synthesis Platform, along with a large number of PLL instances. The companies say this will meet the majority of Matsushita's clock generation and synchronization PLL requirements over the next several years, and will enable a “radical change” in how Matsushita creates PLLs for its SoC projects by saving the time otherwise spent designing full-custom PLLs.

CoWare Inc. has introduced the ConvergenSC product family, which the company says is EDA's “first system-level design solution using a common infrastructure for both design and verification, built expressly for SystemC.” ConvergenSC (pronounced “convergency”) aims to integrate what the company calls “the multi-disciplinary requirements of SoC designs.” CoWare says they have produced a SystemC-based environment in which to create optimal, differentiated SoC designs: “Embedded software developers will be able to validate their software on a model of the silicon using the SystemC Transactional Prototype created using ConvergenSC.”

Mark Milligan, Vice President of Marketing for the company, said, “With ConvergenSC, CoWare is first to market with a SystemC solution specifically tailored to making the best design decisions and validating them in a complete system environment including software.”

The first product announced in the ConvergenSC family is System Designer, which addresses the convergence of system architecture design and verification with “high-speed simulation, architecture analysis, and design implementation on a single SystemC-based infrastructure.” System Designer includes: creation of a SystemC transactional prototype for SoCs containing multiple processors, complex busses, memories, custom logic and embedded software, and a SystemC-compliant simulation architecture, which the company says runs up to five times faster than the Open SystemC Initiative (OSCI) SystemC reference simulator. The company adds that ConvergenSC is fully compliant with the OSCI
standard and that any IP models created to SystemC specifications will run correctly within the ConvergenSC environment.

CoWare says that SystemC transactional prototypes allow SoC architecture decisions to be made early in the development process - decisions such as hardware/software partitioning, processor selection, co-processor design, and bus architecture. Multiple architectures can be evaluated in the system environment to determine an optimal configuration. The SystemC transactional prototypes execute fast enough to boot larger RTOSs, such as embedded Linux, and can be used for software development, comprehensive system integration, and hardware design verification.

In related news, CoWare introduced the ConvergenSC AMBA Transactional Bus Simulator (TBS), which the company describes as “an off-the-shelf solution for using the AMBA 2.0 on-chip bus specification in SystemC that offers engineering teams a means to differentiate by design.” The new simulator is based on CoWare's transaction-level simulation technology, and aims to speed development of SoC designs based on ARM's on-chip bus specification. CoWare says the design industry is converging on transaction-level modeling (TLM) as a standard abstraction
level for SoC design and that unlike RTL (the standard used for hardware implementation), TLM provides the simulation performance required for system-level integration and the cycle accuracy needed for optimal architecture design.

The company also says that in today's SoC designs, the on-chip bus dominates communication, but engineers are often required to develop their own models, a time-consuming, error-prone process that can take months to produce a fast and accurate model of the complete bus protocol. CoWare says that the ConvergenSC AMBA TBS provides a “verified, transaction-level bus model that is both hundreds of times faster than RTL and fully cycle accurate.”
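A toy model suggests why transaction-level simulation runs so much faster than RTL: a whole burst transfer costs one simulator event instead of several per beat. The classes and event counts below are hypothetical and are not CoWare's API; the point is only the ratio.

```python
# Hypothetical sketch of TLM vs RTL simulation cost for a bus write burst.

class TLMBus:
    """Transaction-level bus: one call moves an entire burst."""
    def __init__(self):
        self.memory = {}
        self.events = 0                # simulator events processed

    def write_burst(self, addr, data):
        self.events += 1               # the whole burst is a single event
        for i, word in enumerate(data):
            self.memory[addr + i] = word

class RTLBus:
    """Signal-level bus: every beat costs several clocked events."""
    def __init__(self):
        self.memory = {}
        self.events = 0

    def write_burst(self, addr, data):
        for i, word in enumerate(data):
            # address phase, data phase, ready handshake: 3 events per beat
            self.events += 3
            self.memory[addr + i] = word

burst = list(range(16))
tlm, rtl = TLMBus(), RTLBus()
tlm.write_burst(0x1000, burst)
rtl.write_burst(0x1000, burst)
assert tlm.memory == rtl.memory    # same functional result...
print(tlm.events, rtl.events)      # → 1 48   ...at a fraction of the events
```

Real bus protocols spend far more than three events per beat (arbitration, wait states, pipelining), which is how verified transaction-level models reach the "hundreds of times faster than RTL" figure CoWare cites while still defining cycle boundaries.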

Electronics Workbench launched Multicap 7 and Multisim 7 schematic capture and simulation tools for professional-level PCB design. The company says that Multicap 7 is designed to perform schematic entry without simulation and is a capture program suitable for pure schematic entry, driving simulation, or feeding PCB layout. Multicap 7 includes modeless operation, which the company says eliminates the need to switch between part placement and wiring modes, autowiring, efficient part placement from logical parts bins arranged on the desktop, rubber-banding on part moves, bus vectors that allow users to wire wide buses of up to 128 bits in one step, and an integrated database.

Multisim 7 is a simulation tool for verifying circuits and correcting errors, and includes an integrated version of Multicap. The company says that Multisim is “the world's most popular SPICE simulator,” and that Multisim 7 builds on that legacy with capabilities that are “rare in today's shrink-wrapped EDA market.” These new features include circuit wizards that self-generate circuitry to meet user-defined parameters for filters and 555 timers, automatic SPICE model makers, integration with National Instruments LabVIEW for acquisition of real signals and comparison to simulated results, as well as other enhancements.

Ian Suttie, Vice President of Sales and Marketing, said with obvious confidence, “Multisim 7 and Multicap 7 are completely devoted to the requirements of professional mainstream designers [and] form the foundation of our future integrated design solutions. We have more users around the globe than any other supplier of shrink-wrapped EDA software. They have spoken and we have responded with innovative functionality simply not available anywhere else.”

LogicVision, Inc. announced the availability of a “silicon-proven solution that provides automated test and repair.” LogicVision, in cooperation with MoSys, Inc., qualified the combination of MoSys' 1T-SRAM family of high-density embedded memories with LogicVision's IC memory BIST and Built-In Repair Analysis (BIRA) support. The companies said a design containing multiple 1T-SRAM instances, BIST, and BIRA was fabricated and used to verify an end-to-end memory test and repair process.

Mentor Graphics Corp. announced that Silicon Graphics, Inc. (SGI) is the first customer of Mentor's new verification package, which includes access to Mentor's VStation emulation system, as well as Mentor's consulting services. SGI says it will use the package to “accelerate development schedules and mitigate the risk of silicon re-spins.” Mentor Consulting says it will work with SGI to develop a customized emulation environment and infrastructure, and will provide on-site support during the project.

Nassda Corp. announced HANEX, a circuit-level timing and crosstalk analysis tool for custom digital designs at 130 nanometers and below. The tool analyzes timing and determines critical delay paths using a hybrid of dynamic and static methods to find nanometer parasitic effects on circuit behavior. The company says it believes HANEX is the first hybrid analysis tool able to automatically identify critical paths, including the impact of crosstalk effects on signal timing, for custom CMOS digital designs with millions of elements. The tool is intended to fill the gap between static analysis methods and requirements for more accurate timing analysis in the nanometer design domain.

The company says that dynamic simulation relies on large numbers of vectors to verify circuit performance. While dynamic simulation provides detailed accuracy, engineers have struggled to create efficient sets of vectors to ensure comprehensive coverage or reveal worst-case operation of circuits. However, the migration to nanometer processes requires that these problems be resolved.

Static-timing analysis (STA) methods were created to address verification complexity and to eliminate the need for simulation vectors, using exhaustive search techniques across all possible paths to reveal timing violations. Although STA delivers results relatively quickly, engineers often waste effort investigating false paths - timing problems which would not be encountered in the actual circuit. STA is also proving less effective in analyzing high-performance nanometer designs because it cannot account for the timing impact of dynamic nanometer effects such as crosstalk. Because STA estimates these effects using coarse approximations, it often yields unreliable results and limits
full-timing optimization.
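The exhaustive-search idea behind STA - and why it can report false paths - can be seen in a toy example. The netlist and delays below are made up; the analysis simply enumerates every topological path to the output and ranks them by delay, with no check that a path can actually be sensitized by any input vector.

```python
# Toy static timing analysis: enumerate all paths through a gate-level DAG.
# netlist: node -> (delay_ps, fan-in nodes); values are invented.
CIRCUIT = {
    "in":  (0,   []),
    "g1":  (120, ["in"]),
    "g2":  (80,  ["in"]),
    "g3":  (150, ["g1", "g2"]),
    "out": (40,  ["g3"]),
}

def all_paths(node):
    """Return every (path, cumulative delay) ending at `node`."""
    delay, fanin = CIRCUIT[node]
    if not fanin:
        return [([node], delay)]
    paths = []
    for src in fanin:
        for nodes, d in all_paths(src):
            paths.append((nodes + [node], d + delay))
    return paths

paths = sorted(all_paths("out"), key=lambda p: -p[1])
for nodes, d in paths:
    print(" -> ".join(nodes), f"{d} ps")
# The top-ranked path is the critical path only *if* it is sensitizable;
# STA alone cannot tell, which is where a hybrid tool's dynamic
# simulation of the candidate paths comes in.
```

If the slowest enumerated path here happened to be logically impossible to exercise, a pure STA flow would still flag it, and engineers would burn time investigating it - exactly the false-path problem described above.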

Nassda says HANEX provides an integrated verification tool that resolves these issues at both the pre-layout and post-layout stages. The tool uses a hybrid analysis method to find critical delay paths in combinational, latch/flip-flop, and dynamic logic, by simultaneously simulating entire critical paths and taking into account voltage-dependent capacitance, Miller capacitance, and non-linear input slopes. HANEX also verifies set-up and hold timing for sequential logic and uses dynamic-accurate clock network analysis to provide slack information, which reduces the reporting of false paths.

The company says the tool's crosstalk analysis features will produce a more realistic assessment of circuit behavior than pure STA or dynamic methods. HANEX also uses its hybrid capabilities to provide accurate clock network timing simulation by automatically identifying and tracing the clock network starting with user-defined clock sources. After it back-annotates interconnect parasitics (from any third-party extraction software) to the associated clock network, HANEX simulates the entire network dynamically with precise fan-out loading, and uses clock arrival time and slope at every clock sink for timing verification.

Obsidian Software, Inc. announced an additional product in the company's RAVEN family - a random test generator used for functional verification of proprietary processors. RAVEN SE (Standard Edition) supports licensable soft cores, running on Linux and Solaris, and is available now for the ARMv4T architecture, which includes the ARM Thumb instruction set, and is implemented in the ARM7T family of processor cores. Future product announcements are expected for the RAVEN product family.

The company argues the validity of the new product offering by pointing out that soft core licensing is complex, and that the types of licenses, support tools, and services offered vary between vendors. But regardless of the type of license, licensees need a way to verify a design. Some vendors offer a static test suite, a simulator, and other supporting verification tools; others do not. A static test suite provides compatibility or conformance testing. However, it is not designed to catch difficult corner cases.

The company says that RAVEN SE can be used to create a static test suite, or it can supplement an existing test suite to find difficult corner cases that a static test suite can't find. Since corner cases are difficult to find and costly to repair, processor designs, whether proprietary or licensable, can be verified using an instruction stream generator. A dynamic instruction stream generator is used most often because it can track the state of the machine and build complex and dense tests.
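The state-tracking idea behind a dynamic instruction stream generator can be illustrated with a short sketch. The three-instruction ISA below is invented and is not RAVEN's; the point is that tracking which registers provably hold nonzero values lets the generator emit dense random tests without producing illegal operations such as a divide-by-zero.

```python
# Illustrative dynamic (state-tracking) random instruction generator.
import random

REGS = [f"r{i}" for i in range(8)]

def generate(n, seed=0):
    rng = random.Random(seed)
    known_nonzero = set()   # regs whose last write was a nonzero immediate
    program = []
    for _ in range(n):
        op = rng.choice(["li", "add", "div"])
        if op == "li":
            rd = rng.choice(REGS)
            program.append(f"li {rd}, {rng.randint(1, 255)}")
            known_nonzero.add(rd)       # rd now provably nonzero
        elif op == "add":
            rd, rs, rt = (rng.choice(REGS) for _ in range(3))
            program.append(f"add {rd}, {rs}, {rt}")
            known_nonzero.discard(rd)   # result unknown: be conservative
        else:  # div: only emit if a provably nonzero divisor exists
            if not known_nonzero:
                continue                # no safe divisor yet; skip this pick
            rd, rs = rng.choice(REGS), rng.choice(REGS)
            rt = rng.choice(sorted(known_nonzero))
            program.append(f"div {rd}, {rs}, {rt}")
            known_nonzero.discard(rd)
    return program

for line in generate(8):
    print(line)
```

A static test suite fixes these choices once; a dynamic generator makes them fresh on every run, steered by the evolving machine state, which is what lets it pack corner cases densely without wandering into undefined behavior.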

Pulsic Ltd. announced the release of its Lyric Physical Design Framework, a physical design tool suite that aims to provide a “flexible, high performance auto/interactive routing and ECO placement solution for all IC design types, including complex analog, custom, mixed-signal and SoC design. From an initial placement of an IC design, the user is able to apply Lyric's new auto-interactive tools to analyze and optimize the placement, route the entire design extremely quickly, and effectively handle any ECOs required at any stage of that process.” The company will be demonstrating all of this at DAC.

Silicon Canvas, Inc. announced that Divio has selected Silicon Canvas' Laker for its current MPEG-4, DV, and MJPEG designs, and for its future multimedia chip designs as well. Divio says it will use Laker post place-and-route, to view and fix DRC and LVS violations.

Synopsys, Inc. announced that Xilinx, Inc. has validated HSPICE for use in its 10 Gbps
RocketIO X Multi-Gigabit serial I/O design kit. HSPICE is a component of the analog mixed-signal (AMS) toolset within Synopsys' Verification Platform and supports directly measured scattering-parameters (S-parameters) analysis, considered critical in the modeling and simulation of system-level signal integrity effects in high-frequency designs. The RocketIO design kit supports the Virtex-II Pro X FPGA architecture with integrated 10-Gbps RocketIO X transceivers.

TriCN announced an agreement to license its TriDL (Digital Dynamic Deskewing Link) SerDes technology to General Dynamics Advanced Information Systems (GDAIS). TriCN says TriDL provides 2.5 Gbps throughput for GDAIS' chip-to-chip communications applications and is built on IBM's 0.13-micron process. The company also says the technology offers savings in power and chip area over comparable analog tools, and eliminates interface performance degradation through the use of dynamic skew compensation.

True Circuits, Inc. announced a line of spread-spectrum and low-bandwidth phase-locked loop (PLL) analog hard macros with the company's LockNow! Technology. The company says its Spread Spectrum PLL allows the spread-spectrum functionality to be included in the ASIC rather than requiring a separate part on the system board, and is designed to multiply an input clock by an integer or fixed-point number with a frequency spreading capability suitable for applications that require spread-spectrum clock sources
to satisfy FCC requirements for RF emissions. The company's Low Bandwidth PLL is designed to address excessive jitter from system clocks originating from lower-quality crystals, and generates high-speed clocks required for processors and chip interfaces that require low-jitter performance.

Industry News - Devices

Denali Software, Inc. announced that NTT Electronics Corp. (NEL) is shipping an HDTV CODEC that incorporates the Denali Databahn memory controller core. The NEL VASA (Versatile and Advanced Signal Processing Architecture) is a single-chip MPEG-2 HDTV CODEC, which interfaces with external DDR-SDRAM memory running at clock rates up to 200MHz (400MHz data transfers). The 60-million-transistor VASA chip was developed jointly by NTT Cyber Space Laboratories and NEL to deliver what the companies call “advanced image processing capabilities for HDTV video applications based on MPEG-2 standards.” Because huge volumes of calculations are required for processing video data, several dedicated chips had previously been used for encoding (compression) and decoding (decompression) HDTV video. The companies say that NEL's new single-chip product should allow for smaller, lower-cost HDTV systems, as well as applications in digital cinema, stereo 3D TV, multi-angle TV, and other large-screen video applications.

Wavecom SA and Atmel Corp. announced the release of a jointly-developed GSM/GPRS baseband processor. The processor is the first result of a long-term technology partnership that has been established between Atmel and Wavecom, capitalizing on Atmel's expertise as a designer and manufacturer of complex SoCs and Wavecom's leadership in the design and development of integrated wireless communications products. The partnership will include development and manufacturing of wireless communications products for 2.5G mobile communications and later generations.

Xilinx, Inc. announced availability of the 644 MHz SFI (SerDes Framer Interface)-4 Single Data Rate (SDR) Low Voltage Differential Signaling (LVDS) and XSBI (10 Gigabit 16-bit Interface) interfaces optimized for use in the Virtex-II or Virtex-II Pro FPGAs, together with the Xilinx(R) RocketPHY(TM) 10Gbps physical layer transceiver family. Take a breath. The company says these ready-to-use interfaces enable system designers of 10 Gigabit Ethernet, OC-192 SONET/SDH, and 10Gbps Fibre Channel applications to “seamlessly interface” between the Xilinx Virtex-II Pro FPGA family and the RocketPHY devices. Xilinx says it has already demonstrated the interoperability of the SFI-4 and XSBI interfaces across a range of ASSP devices, in addition to Xilinx Virtex-II Pro and RocketPHY devices.

Coming soon to a theater near you

HOT Chips 15 - Probably one of the better-kept secrets in the industry, this annual conference brings together designers and architects of high-performance chips, software, and systems. Presentations focus on real, up-to-the-minute developments. This is the 15th year that the symposium will offer a forum for engineers and researchers to highlight their leading-edge designs. Organizers say, “The atmosphere provides for a wealth of networking opportunities with individuals who are making a difference in the high-performance chip arena. Three full days of tutorials and technical sessions will keep you on top of the industry.” It's all happening from August 17th to the 19th
on the Stanford
University campus in Palo Alto, CA. If you can think past DAC, think
HOT Chips.


Karen Bartleson has been chosen the 2003 winner of the Marie R. Pistilli Women in EDA Achievement Award. The award is named for former DAC organizer Marie Pistilli and is presented annually to an individual who has visibly helped advance women in the EDA industry. Bartleson is Director of Quality and Interoperability at Synopsys and brings enthusiasm and joie de vivre to the world around her. The award committee agrees. In giving the commendation to Bartleson, they cite her enthusiasm for technology, the “unique perspective [she has] gained [throughout] her career,” and her active involvement in numerous industry-wide committees. Bartleson will receive the award at DAC
2003 during the
Monday afternoon
Workshop for Women in Design Automation.

Gary Smith, EDA Analyst at Dataquest, said, “Karen is a remarkable woman and one who sets an example for us all. She is successful in her career as a technologist and visible industry spokesperson, and is a mentor and teacher to women and men throughout the EDA industry.”

Jan Willis, Vice President of Strategic Third Party Programs at Cadence Design Systems, said, “Karen is an excellent choice both for her industry-level accomplishments on interoperability and for promoting the industry. And she's well-known as the woman who can explain EDA to just about anyone.”

Bartleson has been active in the EDA and semiconductor industries for 23 years. She joined Synopsys in 1995 and is responsible for initiatives that further EDA tool interoperability and quality programs to enhance customer satisfaction. During her time with Synopsys, she has led the in-Sync, university, corporate quality and compute platforms marketing programs, while also representing Synopsys in the EDA standards arena. Bartleson developed the Technology Access Program (TAP-in) for interoperability and pioneered the Synopsys EDA Interoperability Developers' Forum. In addition, she developed and continues to teach a course entitled “Chips and EDA for Dummies,” which is targeted
at people
who are not dummies, but who come to their work in EDA without a technical background. A version of Bartleson's course is now a
DAC workshop.

Prior to joining Synopsys, Bartleson was the CAD manager at United Technologies Microelectronics Center. Earlier, she began her career at Texas Instruments as a software engineer in the design automation department, helping develop simulation tools. Bartleson has a BSEE from California Polytechnic State University at San Luis Obispo, CA.

In accepting the award, Bartleson said, “In college, I was fascinated by electronic engineering and computer science. EDA combined the best of these worlds into an extremely rewarding career for me. I know the best is yet to come as we develop the technologies of the future, and I encourage all women to join us.”

Congratulations, Karen!

Cadence Design Systems, Inc. announced it has acquired yet another company - K2 Technologies, Inc. K2 specializes in mask data preparation and an automated approach to design finishing, including reticle design synthesis, wafer layout, fracturing, jobdeck generation, generation of mask/wafer fab documentation, and physical
verification technologies. Cadence says the acquisition is part of the company's Design for Manufacturing (DFM) strategy and that K2's technology will be ported to the OpenAccess unified database.

Aki Fujimura, Corporate Vice President and General Manager of the Cadence DFM business unit, said, “At 90 nanometers and below, you can no longer separate design from manufacturing. Acquiring K2 is an important element of our strategy to integrate manufacturing data into the design process to ensure customer success. Another benefit is that our software designers will have even greater visibility into the manufacturing process. This insight will help them improve our tools so that the transition from design to manufacturing is as seamless as possible for our customers.”

Cadence will retain K2's employees, who are located in Salt Lake City, UT, and Dallas, TX. The terms of the acquisition were not disclosed.

Also from Cadence - The company announced the appointment of Leslie Rechan as Senior Vice President and General Manager of Worldwide Field Operations for North America. The company says he will join the team of Cadence regional general managers located in Europe, Asia Pacific, and Japan. Rechan has 20+ years' experience in IT from previous work at IBM and Onyx Software. At IBM, Rechan held positions in field sales, systems engineering, services, development, and general management across worldwide markets. At Onyx, he served as President and COO. Rechan has a B.S. from
Brown University and an M.S. in management from the Kellogg School, Northwestern University.

Icinergy Software announced that it has selected The LogicWorks to represent the company in the Eastern Region of the U.S.

LogicVision, Inc. announced it has named DI Corp. as the exclusive distributor for LogicVision products in Korea. Under the agreement, DI will sell LogicVision's Embedded Test product line and provide marketing and technical service in Korea. LogicVision cites Gartner in saying the Asia/Pacific region has become the largest semiconductor market, at $54.8 billion in 2002. Mukesh Mowji, Vice President of Sales and Marketing at LogicVision, said, “[We] believe the Asia/Pacific region presents a highly compelling market opportunity, given that many of the world's top semiconductor companies are located in Korea.”

Monterey Design Systems announced the opening of a research and development center at Viasphere Technopark in Yerevan, Armenia, under the name of Monterey Arset. The company says the center will employ over thirty scientists, many of whom have advanced degrees in EECS. Jacques Benkoski, President and CEO of Monterey Design, said, “We were very impressed with the quality of technical talent available in Armenia and decided to take advantage of that talent pool by opening this R&D center. With the help of Viasphere, we were able to staff the center very quickly with top-notch researchers, and we plan to aggressively expand our activities there in the future.”

In the category of ...

Issues at DAC 2003 - SystemC versus SystemVerilog

The following comments were received from Brett Cline, Vice President of Marketing at Forte Design Systems.

“Cliff Cummings' May 5th Letter to the Editor in
EDA Weekly brought to light some of the current confusion surrounding SystemC. Cliff is right regarding SystemC's scope: It is not intended to create gate-level or RTL design descriptions. SystemC is meant to permit a single language to be used for specification, architectural analysis, testbenches, and behavioral design.”

“Many designs today start off as complex algorithms; nearly all of those algorithms start off in C or C++, and those designers are using SystemC because it is close to their starting point. So why not just use SystemVerilog's C interface to do high-level design? Because you can't.”

“C/C++ algorithms have no set timing, nor can they be immediately mapped to hardware blocks because of their lack of a protocol interface. The SystemVerilog C interface specifically lacks the basic capabilities for hardware design such as concurrency, hierarchy, and interconnect -
it's 'just plain old C!' SystemC adds the prerequisite high-level design functionality such as hierarchy, cycle accuracy, and bit accuracy. And, by the way, there is a free simulator at

“So, 'Where's the beef?' Cliff [Cummings] asks. It is common to observe 10x-100x improvements in simulation performance for algorithms written in SystemC versus an RTL model. However, even greater value comes from adopting a high-level design flow, with SystemC used in conjunction with behavioral synthesis technology. This approach significantly increases designer productivity, both by reducing time-to-RTL and by speeding verification. Behavioral synthesis, which has advanced significantly since first generation tools like BC, lets design engineers focus on designing hardware - not coding RTL - and SystemC provides the right abstraction level for that.”

(Editor's Note: Forte is sponsoring an open session on
SystemC at DAC 2003 on Wednesday, June 4th, to discuss SystemC pet peeves, desired improvements, and desired features. As an OSCI [Open SystemC Initiative] committee member, the company says it will compile and forward the comments collected. Additionally, you'll probably want to attend the Tuesday
SystemC Technology Symposium moderated by Gary Smith and sponsored by OSCI, with presentations from WHDL, Doulos, STMicroelectronics, and Texas Instruments. It looks like the principal
spokesmen for SystemC will be out in force at that one. Meanwhile, give the SystemVerilog fraternity a chance to argue their case and check out the range of
events that Accellera will be hosting at DAC 2003, including a SystemVerilog Workshop on Monday, a breakfast co-hosted with Axis Systems on Tuesday, and a membership meeting on Wednesday.)

Issues at DAC 2003 - Power

A lot of people are suggesting that power will be one of the primary issues on the table at DAC this year. Andrew Yang, Chairman and CEO for Apache Design Solutions, Inc., and Keith Mueller, Vice President of World Wide Sales and Marketing, offered up these comments in a recent phone call as to why that may be so.

Yang explained the physics behind the power issue: “The primary limitation we're seeing [as process technologies shrink] is physical power analysis - a problem that used to be a secondary problem at larger process technologies with their higher voltage supplies, lower cell densities, and lower frequencies. Traditionally, people have thought of 'power' as 'RTL power estimation' in the design stage or 'static power analysis' in the verification stage. Today, however, the real missing link in the existing nano-scale design flow is examining the power impacts on an SoC caused by physical implementation decisions made throughout the design stage - for example, power grid density/size,
capacitance location/size, packaging, and so forth. With the 1V power supplies and higher frequencies in 90-nanometer designs, dynamic physical effects such as the inductance noise of the power grid and package parasitics need to be considered, along with simultaneous-switching effects.”

“There is a power network or power grid on an IC, which consists of a mesh network. With nanotechnology, you may dedicate up to 3 or more layers of metalization purely for the delivery of power. In wire-bond designs, an inadequate power supply mesh will typically cause static IR drop problems toward the center of the chip, since power is delivered at the periphery of the chip. It's true that with flip-chip packaging, static problems can be almost eliminated because power is distributed in 'planes' across the full chip. However, local dynamic hot spots can still occur due to simultaneous switching, powering down a block in low-power designs and then turning it back on, or other such events. The resulting voltage drop caused by this instantaneously changing current cannot be seen by static analysis tools. This problem becomes critical with nanotechnology because the power/noise ratio diminishes, which may impact yield, operational frequency, and even functionality.”
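Yang's point about IR drop accumulating toward the center of a wire-bond chip can be sketched with a toy model. The following is purely illustrative and not from Apache or the article: it treats one power rail as a one-dimensional resistive ladder fed from pads at both ends, with every cell drawing the same current. All numbers (segment resistance, per-cell current, tap count) are arbitrary assumptions.

```python
# Illustrative sketch: static IR drop on a 1-D power rail fed from both ends,
# as in wire-bond packaging where power enters at the chip periphery.
# All constants below are assumed values, chosen only for demonstration.

N = 20          # taps (cells) along the rail
R_SEG = 0.05    # ohms per rail segment (assumed)
I_TAP = 0.002   # amps drawn per cell tap (assumed)

def ir_drop(k):
    """Cumulative IR drop at tap k (1-indexed from the left pad).

    By symmetry, each half of the rail is served by its nearest pad,
    so segment j carries the current of all taps from j out to mid-rail.
    """
    half = N // 2
    return sum(R_SEG * I_TAP * (half - j + 1)
               for j in range(1, min(k, half) + 1))

drops = [ir_drop(k) for k in range(1, N // 2 + 1)]

# The droop grows monotonically toward the center of the chip,
# which is exactly where wire-bond designs see static IR problems.
assert all(a < b for a, b in zip(drops, drops[1:]))
print(f"drop at edge tap:   {drops[0] * 1000:.1f} mV")
print(f"drop at center tap: {drops[-1] * 1000:.1f} mV (worst case)")
```

Real tools solve the full two-dimensional mesh with extracted resistances, of course; the sketch only shows why the worst static droop lands mid-die when power comes in from the periphery.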

Meanwhile, Mueller explained how design strategies have evolved in dealing with power: “For many years, there have been tools available to get power estimations up-front without physical knowledge of the circuit through RTL power analysis and prediction. Then there were early vector-based
dynamic solutions that were only usable on small blocks of the design. Following on that, tools for cell-based static power analysis were developed, which allowed you to consider the full-chip physical layout, but only as a resistive network using average currents based upon the estimated toggle rates of various blocks. This technology has been part of the standard sign-off flow for designs for many years. But now speeds are much higher, voltage supplies are much lower, and capacitance and inductance are much more important in determining yields and power grid integrity. [However], static tools can't look at the impact of decoupling capacitance, on-chip inductance, or package parasitics.”

“The other main issue with the current design flow is that this power analysis is typically done in the verification stage of the design. If you find anything wrong at this stage, it is extremely painful to fix. The result is that designers must over-design the power grid just to avoid any potential surprises during verification. But when you over-design, there is a die size and routing resource penalty for the power grid. It's an extremely complex problem to accurately analyze the full chip to determine the transient dynamic effects of simultaneous switching of outputs, and the impact of both intrinsic and intentional on-chip decoupling capacitance. Even at 0.18 micron, customers
have come
to us with simultaneous-switching design failures. Low-power designs require particularly thorough analysis, with large blocks switching on and off. Most people use some form of ad hoc on-chip solution such as the use of filler cells or blank space for decoupling.”
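The simultaneous-switching failure mode Mueller describes comes down to V = L·dI/dt: an average-current (static) check sees almost nothing, while the instantaneous current edge through package and grid inductance produces a large droop. Here is a minimal back-of-the-envelope sketch; the inductance, per-driver current, and edge-rate figures are assumptions for illustration, not Apache data.

```python
# Illustrative sketch: supply droop from simultaneous switching outputs
# through package inductance (V = L * dI/dt). All constants are assumed.

L_PKG = 1e-9          # 1 nH of package + grid inductance (assumed)
I_PER_DRIVER = 0.01   # 10 mA current swing per output driver (assumed)
T_EDGE = 0.5e-9       # 0.5 ns current edge (assumed)
T_CYCLE = 10e-9       # 10 ns clock cycle (assumed)

def ssn_droop(n_drivers):
    """Peak L*dI/dt droop when n_drivers switch on the same edge."""
    di = n_drivers * I_PER_DRIVER
    return L_PKG * di / T_EDGE

# What a static, average-current analysis sees for 20 drivers:
# the edge current averaged over the whole cycle is tiny.
static_view = 20 * I_PER_DRIVER * (T_EDGE / T_CYCLE)

print(f"duty-cycle-averaged current: {static_view * 1000:.0f} mA")
print(f"dynamic droop, 20 drivers:   {ssn_droop(20) * 1000:.0f} mV")
```

With these numbers the transient droop is hundreds of millivolts against a ~1 V supply even though the averaged current looks harmless, which is why the quote argues static sign-off alone over-designs or misses these failures.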

“We attack this problem by taking as much of the physical design information into [Apache] RedHawk, as is available to the designers at that particular point in the process. If there's only initial first-pass placement without routing, we can provide early feedback to the design team about power grid requirements and potential static and dynamic hot spots. As you refine the data throughout the design flow, the tool provides increasingly accurate results all the way through to final verification sign-off.”

(Editor's Note: Apache Design will be presenting a hands-on tutorial at DAC 2003, as will Sequence Design, Cadence Design Systems, IOTA Technology, Synplicity, Sigrity, Magma Design Automation, Mentor Graphics, Xilinx, and NPTest. The majority of those
tutorials will be addressing issues related to power and signal integrity. Meanwhile, you might also want to attend the day-long tutorial on Monday at DAC 2003 entitled,
“Design Techniques for Power Reduction” organized by Borivoje Nikolic from the University of California.)

Three rumors and a factoid

(Editor's note: The three rumors below hail from various anonymous sources. The factoid, however, hails from Dave Evans at Forte Design.)

Rumor No. 1 - “EDA people don't party as much as they used to at DAC because we're an aging population.”

Rumor No. 2 - “I hear they're floating the idea of restarting ISD Magazine.”

Rumor No. 3 - “There's 75 million square feet of vacant office space right now in Silicon Valley.”

Factoid - “No matter how much randomness critics may sense in the EDA process - different point tools, companies, etc. - the results we've achieved over time have been simply staggering.”

You can find the full EDACafe event calendar here.

To read more news, click here.

-- Peggy Aycinena, EDACafe.com Contributing Editor.