What Would Joe Do?
Peggy Aycinena
Peggy Aycinena is a freelance journalist and Editor of EDA Confidential. She can be reached at peggy at aycinena dot com.

Imperas: Test it now or Recall it later

February 20th, 2014 by Peggy Aycinena

These are good days for virtual prototyping vendor, UK-based Imperas. The company will be making appearances this coming week at Embedded World in Nuremberg, at DVCon in San Jose the following week, and at CDNLive in Santa Clara the week after that, as well as several events in the UK in this same time frame. Imperas has a lot to talk about, including an announcement involving MIPS, a division of Imagination Technologies.

Per CEO Simon Davidmann in a recent phone call: “We’re small, self-funded and growing, with revenues last year up 65 percent. [Even better], the type of customers we’re seeing are tier-one semiconductor and embedded systems companies. We want to help people build better software. No one builds a chip without simulation, and we believe software development should be done like that as well.”

I asked about the competition. Simon answered, “It’s true, other people have models in the same space as ours – companies like Synopsys, Cadence and ARM – but we tend to cooperate with them. Our real competition is legacy breadboards, and kick-it-and-see techniques, rather than proper methodologies.

“For most complex SoCs, many people try to develop software with simulation at the RTL level, or with a hardware-accelerator box, but those approaches don’t get the throughput of software and performance they need. And with a prototype, they don’t get the controllability and observability. That’s why most of our competition is the legacy mindset in the customers.”

It’s always hard to get engineers to move on from their existing methodologies, I noted.

Simon responded, “Most companies have a particular way of working, a way [that’s always] worked. Then something goes nightmarishly wrong, and now they’re ready to go with a new solution.

“For companies trying to put a multicore ARM processor down, for example, we get some models going for them and, lo and behold, our tools find bugs in their software – some quite serious – that the customers were unaware of, because the software seemed to work. Yet still, people try to keep going with their same methodology, while more and more complex systems are failing.

“These methodologies are ad hoc and not well thought out, even though the challenges have changed. And so it stays, until something goes seriously wrong and causes significant losses. In the chip business, however, no one can commit to a $50 million program and then say, ‘We think it will work.’

“They have to test it! They have to have tools and approaches to quantify their quality. Today, we’re seeing more enlightened customers who know they need better solutions.”

It all seems so reactive, so not proactive, I said, fixing things after a disaster, rather than solving issues before they become critical.

Simon agreed and described those disasters: “First, there’s the slow death disaster, which is not violent.

“In one such case, a company wanted to evaluate our technology and when we went to their offices, we actually tripped over their breadboards lying all over the floor. We got a model of their platform up and running, and we simulated it. However, they ended up not becoming a customer. After two years, they still couldn’t debug their multi-core software on all of that hardware, because there was no visibility, and yet still they didn’t want to make the move to do it in simulation.

“That’s the slow death disaster and it takes several years – battling away, throwing away human resources, doing things the way they have always done it, consuming a lot of resources rather than spending some money that would actually advance what they’re trying to do. It’s quite frustrating and sad when we see those companies and know they could do it so much better!

“The second disaster type is when the product fails in the field – the manufacturer has to recall cars because of the software, or medical instruments because of security issues. It’s always a failed project that prompts the change, and again it’s quite frustrating and sad to see. But often these companies still continue to be too conservative, trying just to throw people at the problem.”

So how do companies know when it’s the right moment to spend money and move forward, to swap out the old way of doing things for the new?

Per Simon, “It’s quite complicated. Most often they don’t have the methodology to quantify what they’re doing. They might have tools that check for errors, that find coding errors, but they don’t have a way in the embedded world to quantify what’s really going on.

“I’m not saying we have all the answers, but our backgrounds are in tools for hardware design and advanced verification. We understand that in hardware you really need to know how to test chips, including code coverage and functional coverage in RTL, and you need to push things into corner cases to expose complex failure modes.

“In developing and testing software, there’s a similar mathematical problem – you have to exercise things in certain ways. Most programmers understand coverage. They know it’s important to see what hasn’t been exercised, more than just simple code coverage. They know using good simulation with advanced tools means they’ll stand a much better chance of finding bugs.
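The point about seeing what hasn’t been exercised can be illustrated with a small sketch (hypothetical code, not Imperas tooling): a test suite can pass every assertion while leaving an entire branch untouched, and only a coverage measure exposes the gap.

```python
# Hypothetical sketch: a "green" test run can still leave branches
# unexercised. Tracking branch coverage reveals what pass/fail hides.

covered = set()

def saturating_add(a, b, limit=255):
    """Add two values, clamping the result at `limit`."""
    if a + b > limit:
        covered.add("overflow")   # corner case: clamp to the limit
        return limit
    covered.add("normal")         # common case: plain addition
    return a + b

# A passing test run that only exercises the common path:
assert saturating_add(1, 2) == 3
assert saturating_add(10, 20) == 30

missed = {"normal", "overflow"} - covered
print("unexercised branches:", missed)  # → {'overflow'}
```

The tests pass, yet the clamping path was never run – exactly the situation where, as Davidmann notes, software “seems to work” until the corner case is hit in the field.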

“We preach the old adage: The more you simulate, the better the quality of your software. If you have six months to test your software – if you do it with models, simulators and tools that run slowly – you’ll find, say, 10 bugs. But if you run things 50x faster, you’ll find 50 or 60 bugs in the same elapsed time. Our models, simulators, and tools are faster and more sophisticated, and allow you to find more bugs in the time you have, and allow you to get them fixed in your product before it ships.”

Simon cautioned, “In software, it’s easy to be misled and believe things work. People show you one path through the software, but it’s not exhaustively testing all the combinations.

“For example, when you’re running a system on something like Linux, and your software runs as several processes or threads, often a different loading affects the scheduling of when process A or process B runs – and you might get different, better, worse, correct, or incorrect results. In that case, you’ve got to have tools that will move the scheduling around during testing, so you can control the different combinations, actually pushing the software into its operational corner cases, which is a good verification methodology.
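The scheduling hazard Davidmann describes can be sketched in a few lines (an illustrative model, not Imperas’ technology): two unsynchronized “threads” each read-modify-write a shared counter, and enumerating every legal interleaving – the way a simulator that controls scheduling can – shows that some orderings lose an update.

```python
# Illustrative sketch (not Imperas tooling): enumerate every interleaving
# of two unsynchronized read-modify-write sequences on a shared counter
# and collect the possible outcomes, as a scheduler-controlling simulator
# could do deterministically.
from itertools import permutations

def run(schedule):
    """Execute one interleaving. Each step is (thread_id, op)."""
    shared = 0
    regs = {}                       # per-thread local copy of the value
    for tid, op in schedule:
        if op == "read":
            regs[tid] = shared      # load the shared value
        else:                       # "write"
            shared = regs[tid] + 1  # store the incremented local copy
    return shared

# Each "thread" increments the counter in two steps: read, then write.
steps = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]

# Keep only interleavings that preserve each thread's program order.
results = set()
for order in permutations(steps):
    a_ok = order.index(("A", "read")) < order.index(("A", "write"))
    b_ok = order.index(("B", "read")) < order.index(("B", "write"))
    if a_ok and b_ok:
        results.add(run(order))

print(sorted(results))  # → [1, 2]: the correct total and the lost update
```

Under a real OS scheduler you might see only the correct result for months; exhaustively driving the interleavings, as above, surfaces the lost-update outcome immediately.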

“When you have hardware and a prototype, however, you cannot do that. Only when it comes to using simulation can you really start controlling and manipulating what’s going on, and that’s when you can do smarter and different things.”

Simon noted that companies like MIPS understand the importance of the message: “MIPS has been working with us on their latest generation of cores, which now have models running unbelievably fast on quad-core x86 PCs. We’re using our parallel simulation technology to improve the speed of their simulations – and now they’re using our technology.”

Imperas is clearly on a roll: the company is seeing significant traction in the market and sensing that its message is being heard. No small part of that success is based on Simon Davidmann’s articulate evangelism for the technology:

“What we’re focusing on at Imperas is the usage of simulation – our tools are built on top of it. Fundamentally, you can’t develop quality modern advanced software on a prototype. You need a simulation!”


Per the Press Release …

Imperas has added support for models of Imagination Technologies’ MIPS processors to QuantumLeap, the company’s parallel simulation performance accelerator. QuantumLeap leverages Imperas’ new synchronization algorithm to provide the fastest virtual platform software execution speed available today on standard, multi-core PC host machines. The Imperas technology – simulation plus processor core models – provides the MIPS ecosystem with the fastest software simulation solution in the industry. Performance of over 16 BIPS has been achieved with QuantumLeap.

Tony King-Smith, EVP of Marketing for Imagination Technologies, is quoted: “We are delighted to be working with Imperas to deliver the fastest Instruction Accurate simulation solution for our many MIPS partners. We have been impressed by how Imperas simulation technology significantly outperforms other commonly-used solutions. Faster simulation results in more tests being run, and therefore higher quality software being developed – and that is good news for our extensive MIPS ecosystem community.

“Since acquiring MIPS, Imagination has committed to working more closely with innovative partners like Imperas to deliver superior CPU modelling solutions. As a result, we are confident our MIPS licensees and many software ecosystem partners will have access to the best tools in the industry, enabling them to create the best possible software and products.”

