 What's PR got to do with it?
Ed Lee
Ed Lee has been around EDA since before it was called EDA. He cut his teeth doing public relations with Valid, Cadence, Mentor, ECAD, VLSI, AMI and a host of others. And he has introduced more than three dozen EDA startups, ranging from the first commercial IP company to the latest statistical …

System-design Evolution Follows the Data

 
April 30th, 2014 by Ed Lee

In a recent blog entry we asked Chris Rowen, Cadence Fellow and Tensilica Founder, to share with us what EDA and IP (as an industry) need to do in 2014 to serve its user base better. The following is a follow-up post in which Rowen explains how.

[Photo: Chris Rowen]

When last we chatted in this forum, I responded to a question Ed Lee posed as part of the Predictions 2014 series: What do EDA and IP (as an industry) need to do in 2014 to serve its user base better?

My answer, simply put, was that the EDA industry needs to move beyond EDA. And there I proposed some broad ways to think about why that needs to happen.

Here, I want to explore one way to think about how we do that—how we evolve not only the notion of electronic design automation but system design in general. And the best way, I think, to understand that transformation is by focusing on data.

Along the data path

We’re in the era of big data today, but data itself transcends the popular conversations around cloud computing and services. To understand where design is headed, we need to examine the role of data and how we compute, transport and store it at all levels of electronics design.

By understanding how data is treated in those three ways, we will be able to follow the applications, follow the energy, follow the cost, and follow the opportunities to transform people’s lives.

Much of design today is composed of systems within systems, each sharing three attributes: they are data-intensive, distributed and energy-limited. At the same time, a system is no longer just this chip or that chip. The system we are all dealing with is, in fact, made up of our devices, plus the wireless infrastructure, plus the cloud.

By following and analyzing the data along this path, we better understand not only how interconnected the whole ecosystem is, but also how design is changing and how energy considerations increasingly influence it.

Holistic view

In the cloud, the needs of the system to manage and manipulate data are highly specialized. Here, scale is key: massive compute farms are appropriate because they can best amortize compute and access cycles across vast amounts of data. And because of their scale, they are typically located where energy is affordable.

At the server level, a key consideration is how data is transferred between processors and how it is stored. Are some types of data best shunted to and stored on rotating media, and others on solid-state media? Today, server architects are blending the two for performance and cost reasons, and that blend greatly influences system design.
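To make that blending concrete, here is a minimal sketch of the kind of placement rule a server architect might reason about. It is written in Python purely for illustration; the thresholds and tier names are assumptions of mine, not anything from Rowen's post.

```python
# Illustrative only: a toy tiering policy with invented thresholds.
# Real tiering decisions also weigh latency, $/GB, endurance and workload mix.

def place_block(accesses_per_day: int, size_gb: float) -> str:
    """Assign a data block to a storage tier by access pattern (toy rule)."""
    if accesses_per_day >= 100:
        return "ssd"   # hot data: pay more per GB for low-latency solid state
    if size_gb >= 1.0:
        return "hdd"   # cold, bulky data: cheap capacity on rotating media
    return "ssd"       # small and lukewarm: the SSD cost is negligible

for blk in [(500, 0.2), (3, 8.0), (60, 0.5)]:
    print(blk, "->", place_block(*blk))
```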

At the SoC level within servers lies another set of data-driven design challenges, many of them entwined with the intimate relationship between the processor and its cache memory.

Here, the movement of data between the chip and its caches can account for 50-75% of the energy consumed in many systems. Architects need to understand how to amortize that fundamental cost and manage their designs around it.
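A back-of-envelope model shows why that share gets so large. The per-operation and per-byte energies below are illustrative assumptions, not figures from the post; they only reflect the general observation that moving a byte on-chip costs far more than an arithmetic operation on it.

```python
# Toy energy budget. Both constants are assumed, illustrative values.
E_OP_PJ = 1.0     # energy per arithmetic operation, in picojoules (assumed)
E_BYTE_PJ = 10.0  # energy per byte moved between chip and cache (assumed)

def movement_share(ops: int, bytes_moved: int) -> float:
    """Fraction of total energy spent moving data rather than computing."""
    e_compute = ops * E_OP_PJ
    e_move = bytes_moved * E_BYTE_PJ
    return e_move / (e_compute + e_move)

# Even at only one byte moved per four operations, movement already
# takes ~71% of the budget, squarely inside the 50-75% range above.
print(f"{movement_share(ops=1_000_000, bytes_moved=250_000):.0%}")
```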

As we follow the data, we start to reconsider the traditional approaches to designing systems at all these levels and understand more clearly what works and what might not in the years ahead.

The MPU’s evolving role

Consider the role of the microprocessor. The microprocessor and cache memory have always been excellent at handling unknown and myriad applications. (Cache, in a sense, helps the system better guess what it needs to do next.) Today, however, we design more and more around known types of data (video, audio, etc.), and this traditional approach, so effective for managing different kinds of software, can be suboptimal.

We now care a great deal about data-intensive tasks, where we have significant opportunities to handle data in a better, more structured way. One of those ways is dedicated logic. Here, the energy efficiency (MOPS/mW) can be as much as a thousand times better than performing the same computation on a general-purpose CPU.

Dedicated logic is appropriate for a finite set of applications in which relatively little data must be moved, because accessing data (fetching and decoding it) is costly from an energy standpoint. Push a truly data-intensive computation into dedicated logic and you may find you are spending all your energy on the same memory accesses you would perform in any other kind of processor, and the advantage is lost.
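Extending the same toy model shows how that advantage erodes. The only number taken from the text above is the thousand-fold compute-energy gap; the rest are assumptions for illustration.

```python
# Toy comparison: dedicated logic computes 1000x more efficiently (per the
# text above) but pays the same assumed cost per byte of memory traffic.
E_OP_CPU_PJ = 1.0    # assumed CPU energy per operation (pJ)
E_OP_HW_PJ = 0.001   # dedicated logic: a thousand times better
E_BYTE_PJ = 10.0     # assumed energy per byte of memory access, same for both

def total_energy_pj(e_op: float, ops: int, bytes_per_op: float) -> float:
    """Total energy spent on compute plus memory traffic, in picojoules."""
    return ops * (e_op + bytes_per_op * E_BYTE_PJ)

for bpo in [0.0, 0.01, 0.1, 1.0]:
    cpu = total_energy_pj(E_OP_CPU_PJ, 1_000_000, bpo)
    hw = total_energy_pj(E_OP_HW_PJ, 1_000_000, bpo)
    print(f"bytes/op = {bpo:<4}: dedicated logic wins by {cpu / hw:6.1f}x")
# At zero memory traffic the full 1000x advantage holds; by one byte per
# op, both designs spend nearly all their energy on the same accesses.
```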

Three approaches

So, contemporary SoC design has responded to these challenges in three broad ways:

  • CPUs
  • Programmable data processors
  • Hardwired RTL

Now, add to this the rise of data-plane processing, which lets architects select or describe a processor's key attributes (its instruction set, its interfaces) at a high level and feed them to a processor generator. The generator creates the complete hardware design and the complete software-development environment: compilers, debuggers, simulators, RTOS ports, everything needed to instantiate and program the result. In this way you can generate any set of processors you need; the hypothetical sketch below illustrates the shape of such a flow.
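Every type, field and file name below is invented for illustration. This is not any vendor's actual interface, only a sketch of the idea: describe the processor once, at a high level, and get back matched hardware and tools.

```python
# Hypothetical processor-generator front end. All names are invented
# for illustration; no real vendor API is being shown here.
from dataclasses import dataclass, field

@dataclass
class ProcessorSpec:
    name: str
    base_isa: str                                   # starting instruction set
    custom_ops: list = field(default_factory=list)  # designer-added instructions
    bus_width_bits: int = 64                        # interface attribute

def generate(spec: ProcessorSpec) -> dict:
    """Stand-in for a generator emitting RTL plus a matched toolchain."""
    return {
        "rtl": f"{spec.name}.v",          # the complete hardware design
        "compiler": f"{spec.name}-cc",    # knows the custom instructions
        "debugger": f"{spec.name}-gdb",
        "simulator": f"{spec.name}-iss",  # instruction-set simulator
        "rtos_port": f"{spec.name}-bsp",  # RTOS support package
    }

audio_dsp = ProcessorSpec("audio_dsp", base_isa="base32",
                          custom_ops=["mac16x2", "bit_reverse"])
print(generate(audio_dsp))
```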

This puts enormous flexibility into the hands of designers and architects and is all about driving data-centric processing.

Follow the data

To sum up, we see that real systems are deep pipelines of computation from sensor to cloud, so we really need a system view of the energy, the computation, and the application driving them.

This informs how we will continue to architect our electronic systems—from devices to servers and beyond—in the coming months and years.


2 Responses to “System-design Evolution Follows the Data”

  1. Kev says:

    “Hardwired RTL” ?!?

    EDA isn’t going anywhere until it can get past RTL – it’s a horrible abstraction level for doing anything. Likewise SMP processing architectures.

    What’s new here apart from shrinking old stuff onto chips with a bit of tweaking?

  2. Bill says:

    The last lines of Chris’ blog stated how to look at System level:
    “To sum up, we see that real systems are deep pipelines of computation from sensor to cloud, so we really need a system view of the energy, the computation, and the application driving them.

    This informs how we will continue to architect our electronic systems—from devices to servers and beyond—in the coming months and years.”

    This is not prescribing new tools (or languages or…) that must be developed/used but how End Users, EDA and IP suppliers might start to look at various applications and the types of products/services that could be offered to aid End Users. If EDA/IP suppliers do not supply what is required, End Users can determine if they want to build and support “something” internally. If the return on investment is high enough for any participant, new commercial or internal products will be developed.

    Another area that Chris might discuss is the development and adoption of new languages: solo vs group/committee pros and cons.
