October 02, 2006
MathWorks: Simulink HDL Coder
Please note that contributed articles, blog entries, and comments posted on EDACafe.com are the views and opinion of the author and do not necessarily represent the views and opinions of the management and staff of Internet Business Systems and its subsidiary web-sites.
| by Jack Horgan - Contributing Editor
On September 18th MathWorks introduced Simulink HDL Coder, which automatically generates synthesizable HDL code from models created in the company's widely-used Simulink and Stateflow software. The product produces target-independent Verilog and VHDL code and test benches for implementing and verifying ASICs and FPGAs.
The company is well known in EDA circles. However, the size of the company ($350 million) may be a bit surprising.
I had an opportunity before the announcement to talk with Ken Karnofsky, Director of Signal Processing and Communications at MathWorks.
Would you give us a brief bio?
I have been at MathWorks for a little over 12 years. In college I studied a combination of systems engineering and liberal arts; I have always straddled the technology world and the world of English literature. I then went to work for a couple of companies. The first was in the area of signal processing and speech recognition. After that I joined a company that made a data analysis software product. Initially I was in an R&D type of role, then in application engineering and customer consulting, moving gradually into product management and product marketing.
You are Director of marketing for Signal Processing and Communications. What application areas does that cover?
A variety of industries. It tends to be communication, audio, video, radar and tracking, and defense. The automotive industry is looking at techniques for active safety systems and entertainment. There is a lot of medical electronic imaging equipment as well. In terms of frequency and commonality, it is communication, multimedia: audio and video.
Would this include consumer products?
Absolutely, both chips and final products like multimedia players. The best example would be a PC with surround sound and multimedia. We also have customers making portable consumer products, and companies well known for audio and home theater equipment.
What is the annual revenue for MathWorks?
Last year it was about $350 million. There are 1,400 employees.
How much of that was in Signal Processing and Communication?
It is a little bit fuzzy for a variety of reasons. I can't say exactly, but it is somewhere between 25% and 30%, probably a bit higher. The reason I hesitate is that there are people who use signal processing techniques in other applications. There are engineers who are collecting data of some sort, filtering it and doing data reduction and correlation, functions contained in our Signal Processing Toolbox product, but they are not really doing DSP design.
What was the motivation, the backdrop, for developing Simulink HDL Coder?
In the product development process there are two sets of engineers. First there are the people who do system design, modeling and algorithmic development; Simulink and MATLAB are the leading tools for doing that. Then there are the engineers who are responsible for the implementation of hardware and embedded software, relying primarily on hardware description languages and C code. Bridging those two worlds is increasingly seen as necessary to accelerate the development process so that you can deal with the complexity of hardware/software systems. We are hearing from our customers about the need to automatically generate hardware and software implementations from these models and to verify implementations of systems and components against those models.
Simulink and MATLAB are well established for designing, simulating and validating the system model. We have mature technology that is widely adopted for the automatic code generation on the embedded software side through our Real-Time Workshop products and extensions that target specific microprocessors and DSPs as well as what we call linked products that provide verification and debugging interfaces to downstream tools that work with those processors. On the hardware side our initial entry into the hardware design and verification flow is a product called Link for ModelSim which is a co-simulation interface between Mentor Graphics' ModelSim HDL simulator and our MATLAB/Simulink
product. It is used to verify hardware component implementations within the context of an overall system model, and to reuse those models to verify the hardware.
Model-based design is not the exclusive province of MathWorks. There are many companies participating in providing a path from system level models and algorithms into implementation and testing. These include FPGA, processor and DSP vendors; vendors of downstream tools for EDA and ESL; board vendors who provide prototyping and development solutions; and testing, verification, software and hardware companies. If one Googles MATLAB OR Simulink AND (HDL OR RTL OR Verilog), one gets a million hits. This result indicates a high level of interest in the design community in combining these two worlds of MATLAB and hardware implementation and verification.
There are current options for hardware implementation from Simulink models with model-based design. However, they tend to be very specialized. For example, Xilinx and Altera have created add-ons to Simulink, essentially a way to provide a Simulink front end to help development with their IP cores. You can model these in the Simulink environment, specify parameters for implementation and then automatically generate the implementation. Those have been very well accepted and are quite popular and valuable. However, there are some limitations. The first is that they require specific block libraries. Think of the engineering process as going from the initial specification and exploration, which might be a floating point model, and refining that down to a fixed point model. Once you are satisfied with the results, having to replace that model with something specialized to a specific hardware implementation has a couple of ramifications. One is that you have to maintain two sets of models. Testing and keeping them in sync is more difficult, and there is an opportunity for more bugs to creep in as a result. The additional problem is that these tools are not well integrated with the software design flow, at least not for software for embedded processors. A large proportion of designs involving FPGAs involve both hardware and software, whether the embedded processor is on the FPGA or co-processing with a DSP or general purpose processor. They are also target specific. The design is really locked into a particular implementation architecture. If you want to migrate to a different chip from the same vendor or from a different vendor, then you have to start from scratch. If you want to go from an FPGA prototype to an ASIC, you encounter a similar type of problem. Finally, these tools don't leverage the full range of model-based design capabilities that MathWorks offers in terms of verification, validation and other aspects of multi-engineering simulation. That's the backdrop.
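The floating-point-to-fixed-point refinement described in this answer happens inside MATLAB/Simulink itself. As a rough illustrative sketch using the Fixed-Point Toolbox `fi` object (the coefficient value and word lengths below are invented for illustration, not taken from the article):

```matlab
% Illustrative sketch only: refine a floating-point coefficient to fixed point.
% fi(value, signed, wordLength, fractionLength) builds a fixed-point object;
% here a signed 16-bit value with 13 fraction bits (values are hypothetical).
c_float = 0.7071;                          % idealized floating-point coefficient
c_fixed = fi(c_float, 1, 16, 13);          % fixed-point refinement of the same value
quant_err = double(c_fixed) - c_float;     % error introduced by quantization
```

Keeping both representations in the same model is what avoids the two-model maintenance problem the answer describes.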
What does Simulink HDL Coder do?
This product generates synthesizable VHDL and Verilog directly from the Simulink models using standard blocks in Simulink as well as finite state machines that are modeled by our Stateflow product. Stateflow is an optional add-on to Simulink. This represents finite state machines and control logic. Stateflow models or diagrams can be represented as blocks within a Simulink model. So you can have both data path and control logic within the same model. With this product you can generate HDL code for both the data path and control logic elements of the chip. The code is target independent and portable IEEE standard VHDL or Verilog. That means that the design is in a sense future proof.
You can develop IP and maintain it at this level of abstraction, not tied to a specific target architecture, and then retarget it to something new on your next design project. This requires no modification of software or any kind of specialized blocks. You have the ability to maintain one reference design, one truth, so there is no question about what the design intent is. When you later test the implementation, you still have that one reference design that relates all the way back to the original specification and requirements. The output, the generated HDL code, is correct by construction. The Simulink model is bit exact and cycle accurate with respect to the generated code.
The impact we see from this product comes from the fact that now engineers can generate hardware and software from the same Simulink model. There is a large base of algorithmic IP that has been developed in the Simulink environment. Sometimes it has already been used in a software environment, but customers are looking for a way to implement it on an FPGA, a hardware accelerator or an ASIC. This allows them to do that. They can migrate to hardware and leverage what they have already done. The code is a direct generation of the hardware description language from the Simulink executable specification. There has been a lot of talk over the last couple of years about moving up in the level of abstraction, starting at a high level with executable specifications and then producing the design from them. This product does exactly that, but it does not introduce a radical shift in terms of the type of tools that people use. We are simply connecting the standard and most widely used tools for system and algorithmic design with the standard languages for hardware implementation. No intermediate steps. No intermediate languages; C code for processors and HDL code for hardware. We see this as dramatically accelerating the development process, both the design and the verification. One of the reasons it does that is that it bridges the abstraction barrier between the system and algorithm designers, who tend to think in mathematical models, and the hardware designers, who think in hardware description languages, logic gates and the like. It allows the handoff of designs from one stage of the project to the next to be much more productive and unambiguous.
What are the key features of Simulink HDL Coder?
It generates both VHDL and Verilog. There are about 80 Simulink blocks for the data path in V1.0. Control logic is handled by the Mealy and Moore state machines that are modeled in the Stateflow product. These are the typical finite state machine elements used in hardware implementations. The code is a technology-independent, device-independent, bit-exact and cycle-accurate implementation of the model.
In addition to synthesizable code we also generate the HDL testbench. The engineer has already created the testbench in the model, and that testbench can be generated in VHDL or Verilog - formats that will work with popular HDL simulation tools. As a matter of fact, because these are standard IEEE languages, we work with all the leading HDL synthesis and simulation tools. We provide automation scripts that can be tailored and customized to automate the subsequent steps in the flow, for example to kick off a synthesis process with a downstream synthesis tool.
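For readers curious what this looks like in practice: the command-line entry points for code and testbench generation are HDL Coder's `makehdl` and `makehdltb` functions. A minimal sketch, assuming a Simulink model named `mydesign` containing a DUT subsystem `dut` (both names hypothetical):

```matlab
% Illustrative sketch: generate synthesizable HDL and a matching HDL
% testbench for one subsystem. Model/subsystem names are hypothetical.
open_system('mydesign');                               % load the model
makehdl('mydesign/dut', 'TargetLanguage', 'VHDL');     % emit synthesizable VHDL
makehdltb('mydesign/dut', 'TargetLanguage', 'VHDL');   % emit the VHDL testbench
```

Passing 'Verilog' as the TargetLanguage produces Verilog output instead.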
A particular use case we are seeing is customers who want to do design space exploration: generate some designs, run them through synthesis, get some results in terms of clock rates and gate counts, and then, if the design is not meeting the requirements, go back and quickly iterate. They can run multiple designs in a very short period of time. One way we support that is by providing a choice of hardware implementations for the various blocks supported in the product. If you have a vector multiplication or a filter, we have parallel implementations, serial implementations and sometimes in-between, clock-gated or pipelined implementations. By selecting among these, you can turn a knob trading speed against area and latency: a fast, parallel implementation costs more area, while a slower, serial one with more latency is smaller. When you do that design space exploration, the specification of those choices is maintained in a separate control file which can be associated with the model. You can save that and reuse it.
We also handle existing, legacy or externally developed HDL. This is handled through a black box import mechanism that defines the interfaces so that the tool does not touch the internal workings of the HDL. You can bring that in through a Link for ModelSim block. It can be co-simulated with the rest of the design and verified in the system context. When we generate the HDL code for the blocks supported by this product, we also generate the interface code for any of these co-simulation blocks so that the legacy code can be included in the implementation.
Finally the code contains comments that indicate which part of the model generated the code. You can trace back from the implemented HDL back to the source model. That supports debugging and dealing with questions that might arise during the development.
The 80 Simulink blocks. How does that map into the world of possibilities? Can this product synthesize virtually any product under the sun?
Any product under the sun is an unrealistic expectation for a version one product. We did an extensive beta test and gathered requirements from our customers. They are in a variety of industries. They tend to be doing signal-processing-intensive applications such as wireless communications, some multimedia, some video processing. There are also some feedback control types of applications. We asked those customers what kind of building blocks they use to build their IP in those applications and prioritized the block set that way. They tend to be fundamental blocks, going from an adder or delay element or a flip flop up to things like filters. There are certainly more complex algorithms that can be designed in hardware. But we have had feedback from customers that although they would like that in the future, and it would make some tasks easier, they typically build up their own IP out of lower level elements for those types of applications. It is really their kind of secret sauce. We think there is a good range of applications. It is not everything under the sun. There will be a steadily expanding set of blocks in terms of scope and greater complexity of algorithms over time. We think we have laid the foundation so that a pretty broad range can be supported.
If most but not all of a design is covered by these 80 blocks, how do they get synthesizable code for the entire design?
First of all if they have an existing design or something they are developing, the tool will give them a report. There is essentially a compatibility checker that will tell them if they have any blocks that are not supported. If a complex algorithmic block is not supported, one approach is that they can sometimes break it down into smaller components and construct an equivalent block out of blocks that are supported. The other approach is that they could potentially hand code that portion and bring that in through the black box mechanism and include it that way. These are the ways they can work around the limitations.
Simulink has links to ModelSim. You said the product works with a variety of simulators and synthesis tools. Is the link to ModelSim more than a marketing relationship? Is there some advantage to using ModelSim?
There are two aspects to it. The black box import and co-simulation style of verification is supported only with ModelSim today. There are obviously other popular HDL simulators that it would make sense to support in the future. Exporting the testbench, on the other hand, is simulator independent; the generated code and testbench work with any HDL simulator. Another style is the co-simulation style, where you are using the model as the testbench for the generated code. That's one mechanism. Many customers may be using this tool to generate components or subsystems but not the entire design, at least initially. They mean to integrate that with other code derived by other means, typically hand coded. In that environment our code is completely compatible and works with those simulation tools: Mentor, Cadence, Synopsys.
Would you describe the workflow with and without this new product?
Things are beginning to change, and in many ways have already changed, in embedded software design flows in a number of markets. In hardware design, however, there is typically a divide between system algorithm designers and the hardware designers who implement in ASICs and FPGAs. The system designers typically develop the algorithms, test them, make sure they are functionally correct and create a design document. The design document gets thrown over the wall to the hardware designers, who have to interpret it and hand code the HDL, and also hand code the testbench, which can be an order of magnitude more code than the actual code going into the design.
Another challenge with this traditional process is that typically the components are simulated separately. The overall chip design is divided up among teams: three engineers here, a dozen or more there, each doing their own part and their own component level testing. They cannot really do system level testing until they have a full prototype chip in place, which is pretty late in the process. Another issue is that the specification is typically maintained separately from the design. Sometimes the specification is not updated at all.
With Simulink HDL Coder and model-based design we have one environment for the system, software and hardware: a fully executable specification that can be refined, serve as the actual basis for the design, and be maintained. There is a sense of one truth, one reference that evolves and gets elaborated throughout the process, unlike paper documents. When you do that, you can apply certain system level metrics to determine whether the implementation has degraded the design in any way from the specification.
The elaboration takes you from an idealized floating point model with idealized timing to something that is bit exact and cycle accurate with respect to how the hardware is intended to work. Design space exploration lets you control the implementation options and reuse them. Overall this design methodology promotes IP reuse across projects and technologies. If you want to switch vendors, or go from an FPGA to an ASIC, or even take functionality that used to run on a DSP and put it into hardware, you can do that without completely going back to the drawing board.
I talked about the limitations of FPGA vendor products. They are still quite powerful tools and very useful when customers want to use their IP cores directly in the design. You can take our generated code and their generated code, import them through the black box mechanism and create something which includes the entire design.
A number of board vendors are working with this tool now to come out with development solutions that will allow a complete flow from the MathWorks product down to an FPGA board where you can test in real time and get rapid feedback.
One important aspect of model based design is the use of the model itself as the testbench. Once you have tested the model components to be bit exact and cycle accurate and once you have done the implementation whether it is automatically generated or even hand coded, you can use that model to test it rather than rewriting additional testbenches in the hardware description language. When you do that, you can apply certain system level metrics to determine whether the implementation has degraded the design in any way or varies from the specification.
The functional verification actually runs faster, the simulation runs faster because only that particular component is running at the RTL level of detail. Everything else is at the higher system level and therefore runs much faster. Another benefit is that multiple engineers working independently can verify their components not only at the component interface level but at the system level by plugging it back into the overall system model. The generated HDL code can also be used with downstream third party tools, for example other methods for verification.
What was the feedback from your beta test?
We covered the world with our beta test: 60 customers from a variety of industries such as semiconductor, wireless, automotive and aerospace, with both ASIC and FPGA projects. We are getting very strong and consistent feedback that the product is a critical element for accelerating the development process because it automates this aspect of the design process.
Simulink itself is a better environment than C code, which is still widely used as a starting point for system design. In terms of sharing information among team members, especially when they are in different locations, you can understand the behavior and design intent of colleagues much better from a Simulink model than by looking at their C code.
The person from Agere Systems cited in the press release said that they see the most important benefit to be the ability to perform multiple iterations on the design very quickly. Each individual block may not be optimized to the extent a hardware designer might be able to achieve, but for the overall system they can try different architectures, explore various options and arrive at something that meets the requirements much more quickly, because they can experiment and explore so rapidly.
There are 60 beta sites but only one is named in the press release.
We have a couple of others we hope to add. Getting through the approval process can sometimes be time consuming. We are hopeful that we will have approval from the corporate PR people at a few others.
What is the pricing and packaging for HDL Coder?
The product is available immediately and on the three platforms (Windows, UNIX and Linux) that are typically used for FPGA and ASIC design. The price is $15,000 for an individual perpetual license which includes the first year of maintenance service. In subsequent years you pay a percentage of the initial price for maintenance. This service includes technical support and updates.
Individual license means node locked?
Yes. We also have a concurrent license that can be shared across a network. In some cases we have annual options. Typically this is for larger configurations. We generally do not offer that on an individual license basis. But when there is a site or work group, we can offer that as an option. The specific price would depend upon the configuration. Each customer has to do their multiyear cost of ownership calculation to see what is best for them.
The required products are MATLAB and Simulink plus the products that enable fixed point computation within each of them. There are also a number of recommended products that add functionality: additional algorithmic blocks, finite state machines in the case of Stateflow, and the co-simulation and black box import capability in the case of Link for ModelSim.
What would be the cost for a perpetual license of the required products?
That would be $6,700 for those four products, so a total of $21,700.
What is the usage model of HDL Coder relative to Simulink? Does one bounce back and forth or does one only occasionally launch HDL Coder?
It depends upon the task. If you are working on a particular component or subsystem, if you are trying to explore the architecture and get the right design for that subsystem, you would tend to stay in Simulink most of the time. You would use the automated scripts to invoke the synthesis tool to gain insight into the final implementation in terms of area, clock rates and the like. But you don't need to interact with the synthesis tool to do that. You are getting the results out to determine whether it is good enough and whether you need to iterate and try a different design approach. That would tend to be staying in Simulink.
On large projects when you are integrating multiple components there are likely to be different engineers some of whom would tend to spend more of their time in the HDL simulation environment and others who would stay in the Simulink environment. During the course of a project it is going to be back and forth. During the design of a particular component I would expect it would be heavily staying within Simulink.
Could, say, three engineers who were using Simulink on a daily basis effectively share a single copy of HDL Coder, or would they each need their own copy?
It depends on where they are in their project cycle. They may need multiple copies of Simulink and fewer copies of this product. That's one possible scenario where this product would be used less frequently. It might be used all the time during certain phases of the project, but there are other phases where it would be used less often. We have seen that with our C code generation product: during the initial specification and simulation phases you are not really doing code generation; that starts at the prototyping phase. That's likely to happen with this as well. There are going to be cycles in this project which will require more use of this
product. It depends upon what the design looks like. If there are several engineers working on digital hardware components, then you might need one copy for each of them. If some are doing hardware, some software, and some analog or RF design, then only a subset will need it. Sorry, I can't give an exact rule of thumb.
How would you summarize?
The key points in terms of impact of this product are: First being able to use a single Simulink model for both hardware and software. The ability to do design work and develop IP that is technology and device independent. Direct generation of hardware description language from the Simulink executable specification bridging the most commonly used system design tools with the standard hardware description languages. Connecting not only the tools but facilitating communication between the different sets of engineers.
Who is the competition for MATLAB and Simulink in the market you are responsible for?
The primary competition is C code. A lot of people are writing their own C code. If the origins are in the R&D department with algorithm developers, then there are a variety of math packages that are sometimes used. As you get more into system level design there are a few tools like Signal Processing Designer (formerly called SPW). There is also Ptolemy and a few others that are used for some aspects. The Agilent products tend to be more at the RF end. The SPW product is focused a bit on some of the wireless applications. It doesn't have the breadth of multiple domain capability that Simulink does.
Does this new product compete against a new set of players?
There are some products that convert C to hardware description languages. What they are trying to do is provide a higher level design environment. From our perspective and from what we have heard from our customers, the productivity of working in MATLAB and Simulink is much greater than working in C. We do not consider C to be a higher level of abstraction than hardware description languages. But there are tools that are trying to provide some level of higher abstraction for hardware concepts. These specialized tools for Simulink are in some sense doing something similar but they are complementary in providing something specialized for a particular target technology.
How about ESL products as possible competition?
The C to HDL tools we consider to be competing with Simulink HDL Coder. The other ESL tools generally seem to be addressing the architectural aspects of system on a chip design such as what is the right platform architecture for a chip that is going to contain a couple of processors, a DSP, high speed IO and memory controller. How do I architect that so I get the right tool set and acceptable size and cost? However, what they are not addressing is the behavioral and functional design aspects of those products. Some of them are called virtual platform prototypes and that general category. We see these as pretty complementary. They are solving a different problem than we are addressing.
Those are the two most significant categories of competition. Then there are a bunch of other specialized niche players. Beyond that it is hard to tell what ESL is. There are a lot of companies jumping on the bandwagon, calling everything ESL.
Do you see this new product as having a quantum leap potential in terms of revenue?
Quantum leap might be a bit strong, but we view it as very strategic and certainly a growth driver for the company. We look at what happened when we added automatic C code generation for embedded processors. That transformed Simulink for the embedded community from a discretionary research and advanced technology tool into something that could be made an integral part of the development process. Now you see companies, like some of the ones cited earlier, where that is the way they develop software. That's the track we are on with this product. With version 1.0 that's not going to happen overnight. But we see that it has the potential to do the same thing for us with
systems that contain hardware and software.
The top articles over the last two weeks as determined by the number of readers were:
Pyxis Technology Expands Executive Team; Adds Industry Veterans Pyxis announced the addition of Joe Hutt and John Ennis to the company's executive team. Hutt, vice president of engineering, and Ennis, vice president of sales, will be reporting to CEO Naeem Zafar. Hutt most recently served as vice president of technical sales for Magma Design Automation, after a three-year tenure as vice president of engineering. Prior to Magma, Hutt was in charge of the Advanced Technology Group at Synopsys. Ennis comes to Pyxis from a position as senior director of business development at
Clear Shape Technologies, Inc. He has managed sales teams at Cadence and held vice president of sales titles at Avant! Corp.; Anchor Semiconductor, Inc.; and Circuit Semantics Inc.
Stanford Releases Latest Documentary, "The Microprocessor Chronicles" in Ongoing Historical Series of Silicon Valley Available on DVD, the documentary recounts the history of the microprocessor and provides a behind-the-scenes look into the strategic moves and missteps of the leading companies, with an emphasis on Intel. Some of the issues examined are: Why didn't Intel patent the microprocessor? How did Intel convince IBM, Compaq, HP and others to place "Intel Inside" on their computers? How was Intel selected for IBM's personal computer? These are just a few of the questions examined in
this revealing oral history as told by the people that brought this technology to the world.
Co-produced by Stanford University Libraries, Walker Research Associates and Panalta Inc., The Microprocessor Chronicles is available for purchase from
at $49.95 USD per copy. All profits from the sale of The Microprocessor Chronicles will go to Stanford University to support continued research and chronicling of the history of the semiconductor industry.
Magma Introduces FineSim SPICE Circuit Simulator; Newest Offering in FineSim Product Line Provides for Natively Parallel Circuit Simulation While Maintaining the Highest Level of Accuracy
FineSim SPICE can be parallelized over multiple CPUs for linear speedup and higher capacity. By providing increased speed and capacity while maintaining full SPICE accuracy, FineSim SPICE allows designers to simulate advanced circuits, such as PLLs, ADCs, DACs and gigahertz SERDES, that they previously would not even attempt using slower traditional SPICE simulators.
Mentor announced the donation of its Power Configuration File (PCF) specification and guidelines for modeling power-aware cells to Accellera. Mentor's PCF specification includes guidelines for modeling power-aware cells such as retention registers and latches, isolation cells, level shifters and retention memories. The charter of the UPF (Unified Power Format) Technical Subcommittee is to deliver an industry-wide standard for low-power design. The UPF standard is expected to be produced by January 2007.
The topology router technology from Mentor Graphics consists of two applications. First is a topology planner, used by the engineer to plan and optimize the bus systems and sub-system interconnects on the PCB. It complements component placement by allowing the engineer to plan logic pathways optimizing performance and layout. Traditionally, the engineer would sketch these placements and bus paths out on paper to provide the board designer the necessary guidance to route the board.
The second application, the topology router, automatically routes the bus interconnects, accurately following the pathways defined by the engineer. The resulting routing has the structure and quality usually provided only by experienced designers performing hand routing. The result is a significant decrease in design cycle time, increased designer productivity and quality of the board, as well as optimized performance. And since the bus paths are permanently stored with the design database, the PCB can be incrementally modified or reused in future products without re-entry or re-routing of the buses.
Other EDA News
Other IP & SoC News
-- Jack Horgan, EDACafe.com Contributing Editor.
October 09, 2008
Reviewed by 'Mike'
I read your articles and I appreciate the depth to which you probe the various subjects you cover. In your recent article in which you interviewed Ken Karnofsky, Director of Signal Processing and Communication at MathWorks, I believe you missed uncovering HDL Coder's (HDLC) most significant competition, which I think your readers would benefit from knowing about.
I should mention I am a sales manager for Synplicity, and my geographical responsibility is the central US. Synplicity has a product in exactly the same space as HDLC, called Synplify-DSP. It interfaces to Simulink and it outputs vendor- and technology-independent, synthesizable RTL. It has been on the market for two years and is doing well. EDACafe carried the tool's introductory announcement (http://www10.edacafe.com/nbc/articles/view_article.php?section=CorpNews&articleid=123486), which included a supporting quotation from Ken Karnofsky (see the next paragraph).
"DSP designers are increasingly targeting FPGAs for implementation of high-performance DSP designs," said Ken Karnofsky, marketing director for DSP and communications products at The MathWorks. "Synplicity has delivered sophisticated tools for users to generate high-quality RTL code from Simulink that not only delivers impressive QoR, but leverages the comprehensive DSP simulation and analysis already built into Simulink."
Synplify-DSP is the only tool offering true synthesis of the Simulink model (automatic folding, pipelining, multi-channelization, polyphase decomposition), and we uniquely offer technology-specific RTL output, which is optimized for high quality of results and can be retargeted with the push of a button. We have several public success stories to date (http://www.synplicity.com/literature/success/pdf/ss_ssg06.pdf for one). The tool has come a long way since its introduction, and we continue to improve it with each release, recently adding M-code control logic description support, vector support, and RTL import support, for example. A simple Google search for Synplify-DSP will yield many related, informative links if you are interested.
I know the focus of your article was HDLC, and it is good to raise awareness of this tool specifically, and of DSP development flows in general. There are several options for the designer to consider, and Synplify-DSP is certainly one of them.
Thanks for listening. I'll keep reading.
Central Area Sales Director