 What's PR got to do with it?

Posts Tagged ‘semiconductors’

Predictions 2013 – Olivier Coudert on the cloud and a new model for hardware startups

Monday, February 4th, 2013

 

Our next prediction comes from Olivier Coudert, a respected blogger, consultant, and software architect with expertise in algorithms, EDA and cloud computing.

 

Olivier Coudert

In 2013, one major semiconductor company will use the services of a third party to offload its computing resource requirements (for synthesis, simulation, signoff, shared project, or anything they deem important) to the cloud. This third party will work with EDA vendors and cloud providers to build virtual design centers, where customers are given the means to develop, test, and sign off their product. And when I say “cloud” I mean major players in the cloud computing market.

Some semiconductor companies have been feeling the pain of capital investment in datacenters they need only at peak times. So those companies are getting smart and will work with third-party companies to access virtual design centers, built on demand, on a pay-as-they-go basis.

Soon any startup will have access to the computing resources and the EDA software it needs to focus on innovation without breaking the bank. That is a new model for hardware startups, and one the VCs will love: you will no longer need $10M to fund a hardware company, just a few hundred thousand dollars.


Predictions 2013 – Warnac the Magnificent has spoken

Tuesday, January 22nd, 2013

 

Remember Carnac the Magnificent from The Tonight Show?  Well, his son, Warnac the Magnificent, aka Warren Savage, will divine the future of the IP industry this week in our blog.    Warren is founder and CEO of IPextreme.


Predictions 2013

Monday, January 14th, 2013

 

The world did not come to an end in 2012, so we can now breathe a sigh of relief and prognosticate about 2013. Or can we? Well, we can, but what sort of world will it be for the EDA and IP industries in 2013? Should we even go there?

We think so. So we asked industry friends, associates, clients and media folks to ponder what industry-shattering events or breakthroughs we might see in EDA & IP this coming year.

We’ll be posting predictions from these industry visionaries over the next couple of weeks. We hope that you will find them as enlightening and entertaining as we did.

We’ll begin with some eye-opening predictions by blogger, author and industry expert, Paul McLellan.

2013 is all about lithography, EUV, the end of Moore’s law, 3D as a savior, and so on. Specifically:

• There will be a lot of discussion about the costs of 20nm, since it is so much more expensive than 28nm. It will be a very slow transition, with some people going straight to 14/16nm (which is really 20nm with smaller transistors, which in turn is really 26nm with smaller transistors). Expect lots of discussion about the end of Moore’s law.

• EUV lithography will not become commercial during 2013 and so will miss the 10nm node.

• TSV-based 3D ICs will start to become mainstream. Memory on logic, and mixed digital/analog on interposer. Expect lots of discussion about “more than Moore” and how 3D is the new way for scaling.

• The death of a giant will finally take place. Nokia, still #1 only a year ago, will be dismembered. A consortium of Apple, Google and Samsung will buy the patents for billions. Huawei will buy the handset and base-station businesses for peanuts.

• Synopsys will acquire Mentor. EDA will otherwise be fairly boring with the big three being the only companies able to attack the upcoming problems that require dozens of tools to be updated, not just a new point tool inserted in the flow.

• If the IPO markets are open, Jasper, eSilicon, Atrenta and Tensilica will go public. If someone doesn’t buy them first.


Stale IP – another view

Monday, November 26th, 2012

In my last blog, Harrison Beasley shared his views on stale IP.  This week we hear from Manoj Bhatnagar, Senior Director, Field Delivery and Support at Atrenta.

Liz:  Manoj, what is stale IP?

Manoj:  An IP may become stale either because its specification has changed (e.g., USB 1.0 vs. 2.0 vs. 3.0) or because a better implementation has become available (e.g., a graphics core now running at 800 MHz instead of 500 MHz). Typically, people will use the latest version and the older versions fall out of use, so stale IPs of this kind die a natural death. What is more challenging is an IP developed for a specific project that no other project ever used, so over time the IP becomes stale. Most of my answers apply to this type of stale IP.

Liz:  What’s so bad about it?

Manoj:  The main issue with a stale IP is that nobody really knows the details about it. If I were to use that IP, I would be putting my design at risk, because I am adding logic for which I don’t have all the information and can’t find anyone who can provide it either.

Liz:  How do we prevent IP from going stale?

Manoj:  One of the key things that can be done to prevent IP from going stale is to document it. I don’t know how many people still remember TTL datasheets, but when you looked at a datasheet, you got complete visibility into what that component did. The same concept can be applied to present-day IPs: document the various characteristics of the IP. For a hard IP, this may be the timing characteristics, physical profile, etc., while for a soft IP it may be timing constraints, clock domain information, a testability profile and a power profile.
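
To make that concrete, here is a minimal sketch of the kind of machine-readable "datasheet" a soft IP might ship with, written as SDC-style Tcl timing constraints. All names and numbers below are illustrative assumptions, not from any actual IP deliverable:

# Hypothetical timing "datasheet" for a soft IP (names and values invented).
# Clock domain information: two asynchronous clocks drive the block.
create_clock -name sys_clk -period 2.5 [get_ports sys_clk]   ;# 400 MHz
create_clock -name usb_clk -period 8.0 [get_ports usb_clk]   ;# 125 MHz
set_clock_groups -asynchronous -group {sys_clk} -group {usb_clk}

# I/O timing budget, relative to the system clock.
set_input_delay  1.0 -clock sys_clk [get_ports data_in*]
set_output_delay 1.2 -clock sys_clk [get_ports data_out*]

Captured this way, an integrator (or a tool) can read off the block’s clock domains and interface timing without hunting down the original designer.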


Who’s getting hit by the double whammy of verification?

Thursday, October 25th, 2012

 

Piyush Sancheti

In a recent article written by EDA industry watcher Ann Steffora Mutschler, Atrenta’s VP of Product Marketing Piyush Sancheti pointed to the curse of the verification double whammy for engineers:

“For verification engineers and for designers, this is a double whammy,” noted Piyush Sancheti, vice president of product marketing at Atrenta. “If you ask a digital design or digital verification team, they will tell you that low-power design and the introduction of analog/mixed-signal components on what used to be a simple digital chip is a significant verification challenge. For verification engineers what this means is your finite state machines or your control logic just got that much more complicated. If you go from 2 domains to 20 domains, your verification complexity just increased an order of magnitude.”

We caught up with Piyush in the Atrenta hallway and asked him to elaborate on his statement.  Here’s what he said:

Ed:   So what is the double whammy and why should we care?

Piyush: With the onset of A/MS and low power requirements, digital design teams now have to contend with two new foreign entries into their previously monolithic design environment.

Ed:  And they are…?

Piyush:  New logic blocks that are completely foreign to digital designers, and the implementation of power management techniques like power and voltage domains. Voltage domains allow the timing-critical portions of the design to run at a higher voltage (overdrive) and the rest at a lower voltage (underdrive). Power domains, on the other hand, allow one to turn off the power to entire blocks of the design when not in use.
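
To picture what Piyush is describing, here is a rough sketch of power intent in UPF-style Tcl: two voltage domains on different rails, plus a switchable power domain for a video block. Every name is invented and supply-port connection details are omitted; this is an illustration, not production UPF:

# Overdriven, timing-critical CPU; underdriven top level; switchable video block.
create_power_domain PD_TOP
create_power_domain PD_CPU   -elements {u_cpu}
create_power_domain PD_VIDEO -elements {u_video}

# Two rails at different voltages, plus a switched supply for the video block.
create_supply_net VDD_0V9 -domain PD_CPU     ;# overdrive rail
create_supply_net VDD_0V7 -domain PD_TOP     ;# underdrive rail
create_supply_net VDD_SW  -domain PD_VIDEO   ;# switched off when video is idle
create_supply_net VSS     -domain PD_TOP

# (supply connectivity details omitted for brevity)
set_domain_supply_net PD_TOP   -primary_power_net VDD_0V7 -primary_ground_net VSS
set_domain_supply_net PD_CPU   -primary_power_net VDD_0V9 -primary_ground_net VSS
set_domain_supply_net PD_VIDEO -primary_power_net VDD_SW  -primary_ground_net VSS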

Ed:  Haven’t digital designers always needed to be conscious and conscientious about power?

Piyush:  Not to the extent they must be these days. Here’s the challenge: say you are designing a chip for a smart phone. When you are watching a YouTube video, you don’t need the phone function, so you want to make sure that the phone functions are off. What’s the result? You’re saving power, or in consumer terms, preserving battery life. But if the smart phone gets a call, you have to be sure the phone function turns on instantly, without adversely impacting your video viewing experience. So designers have to make sure the domains turn off and on in perfect harmony, almost like conducting a symphony.

So what’s the problem?  New power management logic that designers are not used to has been thrust on them rapidly and recently.  They need to get up to speed fast.  This is not an easy job. Not only that, but you now have very complex finite state machines that switch these functions on and off seamlessly.

Ed:  So what’s the solution?

Piyush:   A comprehensive methodology for functional and structural verification.

Ed:  Can you elaborate?

Piyush:  These complex finite state machines must be verified exhaustively for functional correctness. You need to make sure that the various functions on your smart phone wake up and shut off in a timely manner without adversely impacting the device behavior, and ultimately the user experience. With structural verification, you need to make sure that the perimeter of each voltage and power domain is properly secured. When signals cross from one voltage domain to another, you need voltage level shifters. Similarly, you need isolation logic between power domains to ensure that signals don’t float to unknown values when a domain is powered off.
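
Continuing the hypothetical UPF-style sketch from above, securing that perimeter looks roughly like this; the signal names are invented and a real flow would add considerably more detail:

# Clamp the video block's outputs to 0 while its domain is powered off.
set_isolation iso_video -domain PD_VIDEO \
    -isolation_power_net VDD_0V7 -clamp_value 0 -applies_to outputs
set_isolation_control iso_video -domain PD_VIDEO \
    -isolation_signal video_iso_en -isolation_sense high -location parent

# Level-shift signals entering the overdriven CPU domain from the 0.7 V logic.
set_level_shifter ls_cpu_in -domain PD_CPU \
    -applies_to inputs -rule low_to_high -location self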

Ed:  So what sort of tools and methodologies do you see out there to meet the double whammy challenge?

Piyush:  Well, of course, I’m most familiar with the Atrenta platform.   There are undoubtedly other ways to go about this job.  But from what I see, SpyGlass Power is being used by many large chip and system companies for static signoff of power and voltage domains. SpyGlass Advanced Lint enables exhaustive finite state machine verification using formal techniques. And with our recent acquisition of NextOp Software, we now have BugScope to ensure dynamic verification (simulation) is covering all the corner cases that are now part of your design because of this increased complexity.

Ed:  So your final words of wisdom?

Piyush:  Verification of modern-day SoC designs is a daunting task. But as with any complex problem, a systematic approach using a combination of static and dynamic verification techniques will help you reach your device ambitions faster.

Note: Lee PR does work for Atrenta.

On Power Awareness in RTL design analysis: Update to a Brian Bailey-designated Top Ten Atrenta article

Wednesday, October 10th, 2012

 

Narayana Koduri’s article on Power Awareness in RTL Design Analysis was the second of two Atrenta articles that EE Times editor Brian Bailey named among the ten most-read contributed articles published on EDA Designline. Along with the number one article, Understanding Clock Domain Issues by Saurabh Verma and Ashima Dabare, that makes Atrenta, as far as we can tell, the only company with two articles in the top ten list.

So even though his article appeared in July 2012, we asked Narayana to give us an update on what he sees as the power awareness challenges.  Here’s what he had to say.

Ed:  Narayana,   your article sure addressed a hot topic in SoC design. And judging by the readers you got, the design community liked what you had to say.  What can you add to your July 2012 article?

Narayana:  Thanks Ed.   From what I can see, due to aggressive low power requirements, power domains are being implemented in a growing number of SoC designs to reduce both leakage and dynamic power.

Ed:  How so?

Narayana:  UPF or CPF can be used to define the power intent, capturing information such as power domains, level shifter requirements, isolation cells, retention cells, power switch cells, etc. These specifications help implementation and verification tools deal with the power intent properly.
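
As a rough illustration of the retention and power-switch pieces Narayana mentions, here is a hedged, self-contained UPF-style fragment; all names are invented, and it assumes supply nets VDD and VSS are defined elsewhere:

# A switchable core domain fed by a gated supply (hypothetical names).
create_power_domain PD_CORE -elements {u_core}
create_supply_net VDD_SW -domain PD_CORE
set_domain_supply_net PD_CORE -primary_power_net VDD_SW -primary_ground_net VSS

# Power switch: pwr_en gates the domain's supply on and off.
create_power_switch sw_core -domain PD_CORE \
    -input_supply_port  {vin VDD} \
    -output_supply_port {vout VDD_SW} \
    -control_port       {ctrl pwr_en} \
    -on_state           {on_s vin {ctrl}}

# Retention: save register state before power-down, restore on wake-up.
set_retention ret_core -domain PD_CORE -retention_power_net VDD
set_retention_control ret_core -domain PD_CORE \
    -save_signal {ret_save high} -restore_signal {ret_restore low}

Implementation and verification tools read a file like this alongside the RTL so they can insert, and then check, the corresponding cells.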

Ed:  So how best to deal with power intent?

Narayana:  RTL tools that verify aspects of the SoC such as clock domain crossings, testability, timing and routing congestion need to be aware of the power intent. If they are not, verification is incomplete, which may lead to design failures or a costly re-spin. This need for power-aware verification is driving new requirements in the EDA tool flow.

NOTE:  For an update on Understanding Clock Domain Issues see our blog of October 3.

Note:  Lee PR does work for Atrenta

 

Top article on EE Times: authors give an update on CDC challenges

Wednesday, October 3rd, 2012

Ashima Dabare

Saurabh Verma

On the heels of EE Times editor Brian Bailey naming their article “Understanding clock domain issues” the number one article on EDA Designline, we checked in with authors Saurabh Verma and Ashima Dabare on what they see as developments and new challenges since they wrote their 2007 article. Here’s what they said.

Ed: It appears that your article got twice as many views as the number two article. Congratulations on the EE Times recognition!

Obviously, CDC was an important design issue in 2007 and it certainly is today. What would you say to designers today?

Ashima: CDC design is evolving, and so are the synchronization techniques and verification tools. Since we wrote the article, we have witnessed new challenges posed to CDC verification tools.

One that comes to mind is evolving synchronization styles. In addition to clever variations of synchronization techniques introduced by designers trying to meet their design objectives or schedules, new architectures such as those required for a network on chip (NoC) have been introduced, which in turn require verification tools to reinvent themselves.

Recently, CDC tools have introduced generic synchronization verification techniques that do not rely on the structure of the synchronizer and instead analyze clock domain crossings at the protocol level, allowing them to better recognize synchronizers, reduce “noise” and improve root-cause analysis.

Saurabh: Also, global chip design dictates that blocks and IPs be designed in various geographical locations. The person doing CDC verification is rarely the designer, so CDC verification tools are now challenged with providing root-cause analysis of CDC problems to people who have little knowledge of the block.

Design sizes are also growing fast, and so are the numbers of clocks and clock domains. Combined with the move toward global chip development, flat CDC verification of a large SoC would be a painful exercise in which bugs can easily slip through.

Divide and conquer seems to be the best possible approach. To begin with, the lower-level blocks should be analyzed, and any CDC issues should be fixed at the block level itself. Once all the individual blocks are CDC clean, their abstract models can be plugged in and the complete design analyzed for CDC issues at the interconnect level.

Ed: So how would you sum up what CDC design needs in 2012?

Ashima: With the ever-increasing complexity of design styles, robust CDC verification is indispensable to enable successful chips on the first silicon attempt!

Note:  as near as we can tell, Atrenta is the only company to place two articles in Bailey’s top ten. Narayana Koduri’s Power awareness in RTL design analysis came in as ninth most read. We’ll catch up with him next week, so stay tuned.

Note: Lee PR does work for Atrenta

Ajoy Bose and Jim Hogan on the systems customers now defining SoC design

Tuesday, September 4th, 2012

 

Jim Hogan

Ajoy Bose

Atrenta CEO Ajoy Bose and EDA visionary and investor Jim Hogan spoke at a recent National Institute of Technology (NIT) meeting on the momentous changes we see in who controls chip design these days. Clearly, systems companies like Apple define, even dictate, what they want from their silicon vendors, and these systems customers certainly want a lot more than they did ten years ago.

Jim tells us why we have to care:

Video Part 1

Video Part 2

PowerPoint Presentation

Ajoy shows us how to care:

Video Part 1

Video Part 2

PowerPoint Presentation

Note: Lee PR does work for Atrenta

Renesas on SpyGlass Physical

Monday, August 20th, 2012

Yasushi Ozaki, director of the engineering department overseeing product design and development at Renesas, spoke at the first Atrenta Technology Forum in Yokohama. This is Tech-On’s coverage of his presentation and his evaluation of SpyGlass Physical, an EDA tool for estimating chip area and logic depth at the RTL stage:

http://www.nikkeibp.co.jp/article/news/20120720/316569/

Note: Lee PR does work for Atrenta

 

Gary Smith on NextOp, now SpringSoft

Tuesday, August 7th, 2012

Yesterday we heard from Jim Hogan on the NextOp acquisition.  Today Gary Smith chimes in on NextOp and the recent SpringSoft buyout.

Ed:  What do the Atrenta acquisition of NextOp and the Synopsys acquisition of SpringSoft mean to EDA?

Gary:  Technology-wise, the Atrenta acquisition means that the Silicon Virtual Prototype is becoming a reality.  Business-wise, it could be the start of the roll-up in the middle.

SpringSoft was always a possible roller-upper, but generally thought of as a long shot because of its Taiwan headquarters.  SpringSoft certainly makes Synopsys stronger, especially with the Laker analog product, but it doesn’t affect the SVP or the RTL sign-off tool market.  Debug is just being rolled up into the simulator.

Ed:  What sort of new day does it herald for EDA?

Gary:  With the creation of the SVP, we now have RTL sign-off established. This then is the breakpoint between design and implementation, just as the gate-level netlist was in the past.  This will free up a large group of designers, and enable a new, larger group of designers, which in turn will cause an explosion of new systems development.

Ed: What’s the significance?

Gary:  Growth, opportunity, money; the usual stuff.

Note: Lee PR does work for Atrenta