Nose around the design automation industry a bit and you’re sure to find mention of the goal to “shift left.” Basically, the idea is to solve problems and add value earlier in the design cycle. Engineers usually first stitch together the basic functional blocks of whatever they are building before moving on to higher-level system integration and software tasks. Turns out this isn’t a bad metaphor for conference planning. Like chips and ICs, conferences work best when the essential elements (in this case, marquee presenters and core technical content) are in place early. I can safely report this is more or less true now for DAC 52, which is slated to be simply amazing when it’s finally “launched” this summer.
Following a very successful DVCon in San Jose two weeks ago, next week we travel a few miles up the road to the Santa Clara Convention Center for the Synopsys Users Group (SNUG) Silicon Valley event. This will be our third year in a row exhibiting at this show, and it has become one of our favorites. We will also be speaking for the first time ever, and we’ll fill in all the details shortly. But let’s start by looking at why this show stands out and why we enjoy it so much.
SNUG actually has quite an interesting history. It began in 1991 as a way for Synopsys users to discuss common problems and solutions, meet with technical experts from the company’s R&D and AE teams, and learn about new products and features. Unlike many single-vendor conferences, SNUG has been driven largely by the users. They choose the papers to be presented and make many of the key decisions on how the event is run. Synopsys of course provides support in many ways.
Based on research findings from schools and sports, Dr. Kageyama concluded that high expectations from teachers and coaches correlate positively with an individual’s learning and growth, helping to build confidence and make the most of one’s ability.
The blog resonates with me because I am a parent, always seeking ways to help my daughters reach their maximum potential. But it also reminds me of a common practice I see in the industry regarding formal verification adoption.
Last time, I wrote about a “multi-core” project that I was working on 30 years ago. To be fair, it was actually “multi-CPU” rather than “multi-core,” but many of the challenges were similar, as was the initial design decision to distribute the processing capacity. It is interesting to compare the system we were developing all those years ago with modern ideas for multi-core design. A common approach is to use one core for real-time functionality (running an RTOS like Nucleus, perhaps) and another for non-real-time activity (maybe running Android or Linux).
Using multiple CPUs (or cores) presents a variety of challenges. One is the division of labor, which was reasonably straightforward in this case. Another is communication between the processors …
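To give a concrete picture of what inter-processor communication can look like, here is a minimal sketch of a shared-memory mailbox, the kind of single-producer/single-consumer handoff often used between cores. This is not taken from the original project; the mailbox layout, slot count, and the use of pthreads to stand in for two cores are all assumptions made purely for illustration.

/*
 * Illustrative sketch only: a minimal single-producer / single-consumer
 * mailbox over shared memory. The two pthreads below stand in for two
 * cores; on real hardware the struct would sit in a shared region and
 * each side would run on its own processor.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define SLOTS 8                      /* power of two keeps the index math simple */

struct mailbox {
    _Atomic unsigned head;           /* advanced only by the producer */
    _Atomic unsigned tail;           /* advanced only by the consumer */
    int msg[SLOTS];                  /* message payload slots */
};

static struct mailbox mb;

static int mbox_send(struct mailbox *m, int value)
{
    unsigned head = atomic_load_explicit(&m->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&m->tail, memory_order_acquire);
    if (head - tail == SLOTS)
        return -1;                   /* mailbox full */
    m->msg[head % SLOTS] = value;
    /* release ordering: the payload is visible before the new head is seen */
    atomic_store_explicit(&m->head, head + 1, memory_order_release);
    return 0;
}

static int mbox_recv(struct mailbox *m, int *value)
{
    unsigned tail = atomic_load_explicit(&m->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&m->head, memory_order_acquire);
    if (head == tail)
        return -1;                   /* mailbox empty */
    *value = m->msg[tail % SLOTS];
    atomic_store_explicit(&m->tail, tail + 1, memory_order_release);
    return 0;
}

static void *producer(void *arg)     /* stands in for the "real-time" core */
{
    (void)arg;
    for (int i = 0; i < 5; i++)
        while (mbox_send(&mb, i) != 0)
            ;                        /* spin until a slot frees up */
    return NULL;
}

static void *consumer(void *arg)     /* stands in for the "application" core */
{
    (void)arg;
    for (int received = 0; received < 5; ) {
        int v;
        if (mbox_recv(&mb, &v) == 0) {
            printf("got message %d\n", v);
            received++;
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Even in a toy like this, the essential issue shows up: the two sides share data but must agree on when a message is actually ready, which is exactly the kind of coordination problem that made (and still makes) multi-processor communication interesting.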