Graham is Sr. Director of Marketing at Real Intent. He has over 20 years of experience in the design automation industry. Occasionally he writes blogs for the Dominion of Design. The views and opinions expressed in this blog are his alone and not those of his employer.
Algorithmic Memory Eases the Transition to Next-Generation Process Node
March 12th, 2012 by Graham Bell
Badawi Dweik, Director of Product Marketing at Memoir Systems, discusses how new memory technology can ease the transition to the next-generation silicon process node.
Next-generation performance means different things for different applications. For high performance computing, faster processor clock speeds may be the ticket. For mobile computing, energy efficiency is paramount. For feature-rich consumer electronics, size matters and packing more functionality into a smaller form factor is the order of the day. SoC architects use a wide variety of techniques to increase application performance. However, the ultimate route to breakthrough performance, by any measure, is next-generation semiconductor process technology. Many designers would like to take advantage of the latest process node technology, but are forced to wait 6 to 12 months until the memory IP portfolio is fully developed and validated for the new process node. A new memory technology called Algorithmic Memory® can greatly ease the burden of migration and enable a broad memory portfolio much earlier.
Algorithmic Memory cores are created by adding logic to existing embedded memory macros, enabling them to operate much more efficiently. Within the memories, sophisticated algorithms intelligently read, write, and manage data in parallel using a variety of techniques such as buffering, virtualization, pipelining, and data encoding. These techniques are woven together and operate seamlessly to create a new memory capable of processing an order of magnitude more Memory Operations Per Second (MOPS). This increased memory performance has been mathematically proven, and implementations of Algorithmic Memory have been exhaustively formally verified to demonstrate higher performance for all memory access patterns. The increased memory performance capacity is made available to the system through additional memory ports, so that many more requests can be processed in parallel within the same clock cycle. The resulting solutions appear to the system exactly like standard multi-port embedded memories.
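The banking-and-buffering idea can be illustrated with a toy functional model. The sketch below is a generic illustration of one well-known approach, not Memoir's proprietary algorithm: a memory that accepts one read and one write per cycle is built from two single-port banks (each allowing one access per cycle), with a small write buffer deferring a write whenever it collides with a read in the same bank. All names and structure here are assumptions for illustration.

```python
# Toy model of an "algorithmic" 1-read/1-write memory built from two
# single-port banks plus a write buffer. Illustrative only -- this is a
# generic banking-and-buffering sketch, not Memoir's actual design.

class AlgorithmicRAM:
    def __init__(self, depth):
        # Each bank is single-port: at most one access per cycle.
        self.banks = [[0] * (depth // 2) for _ in range(2)]
        self.buffer = {}  # deferred writes: address -> data

    def _loc(self, addr):
        return addr % 2, addr // 2  # (bank, row)

    def cycle(self, read_addr=None, write_addr=None, write_data=None):
        """Issue up to one read and one write in the same cycle."""
        read_data = None
        rb = wb = None
        if read_addr is not None:
            rb, rrow = self._loc(read_addr)
        if write_addr is not None:
            wb, wrow = self._loc(write_addr)

        if read_addr is not None:
            # A buffered (deferred) write holds the newest data.
            if read_addr in self.buffer:
                read_data = self.buffer[read_addr]
            else:
                read_data = self.banks[rb][rrow]

        if write_addr is not None:
            if rb == wb and read_addr is not None:
                # Bank conflict with the read: defer the write.
                self.buffer[write_addr] = write_data
            else:
                self.banks[wb][wrow] = write_data
                self.buffer.pop(write_addr, None)  # drop any stale deferral

        # Opportunistically drain deferred writes into banks idle this cycle.
        busy = {b for b in (rb, wb) if b is not None}
        for addr in list(self.buffer):
            b, row = self._loc(addr)
            if b not in busy:
                self.banks[b][row] = self.buffer.pop(addr)
                busy.add(b)
        return read_data
```

In this model both the read and the conflicting write appear to complete in the same cycle, even though each physical bank services only one access; a real RTL implementation would bound the buffer and pipeline the drain logic.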
Algorithmic Memories can be drop-in replacements for existing embedded memories and can easily integrate into an ASIC design flow. Furthermore, because Algorithmic Memory technology is implemented in RTL, it works across any process node and any foundry. Using Algorithmic Memory cores makes it possible to quickly create a comprehensive memory portfolio using just a small number of physical memories. For example, multiport memories can be generated that process up to 10X more MOPS than the underlying single-port memory. The clock speed of the single-port memory macros determines the maximum frequency at which the Algorithmic Memory will run. Regardless of whether the base physical memory is sourced from the foundry or third-party IP suppliers, the higher the clock speed of the base memories, the higher the maximum attainable clock speed will be for the resulting Algorithmic Memories.
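As a back-of-the-envelope illustration of why added ports multiply MOPS at a clock speed capped by the base macro (the port counts and clock frequency below are hypothetical, not vendor figures):

```python
# Peak MOPS scales with ports times clock; the base macro's clock is the
# ceiling. All numbers here are hypothetical illustrations.
def peak_mops(ports, clock_mhz):
    """Peak memory operations per second, in millions."""
    return ports * clock_mhz

base = peak_mops(ports=1, clock_mhz=500)  # single-port macro at 500 MHz
algo = peak_mops(ports=4, clock_mhz=500)  # 4-port wrapper, same clock ceiling
assert algo == 4 * base                   # speed-up tracks the added ports
```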
Algorithmic Memory technology can also be used to lower memory area and power consumption without sacrificing performance. There is a significant area and power penalty when a higher-performance memory is built using circuit techniques alone. With Algorithmic Memory technology, a lower-performance memory—which typically has lower area and power—is combined with memory algorithms to generate a new memory. This Algorithmic Memory achieves the same MOPS as a high-performance memory built using circuit techniques alone, but uses significantly less area and power.
Until now, using customized multiport memories has always been challenging and problematic because of the cost and the amount of time required to design and validate the new memory. This is no longer the case with Algorithmic Memories. Algorithmic Memory cores can be generated rapidly by combining existing memory macros with previously verified algorithms, thus requiring no further silicon validation.
Algorithmic Memory meets the memory performance needs of next-generation applications. It is process-node and foundry independent, and applies to a variety of SoC implementations such as ASICs, ASSPs, GPPs, and FPGAs. Algorithmic Memory addresses the challenge of memory performance at a higher level and allows system designers to rapidly create customized memory solutions that are optimized for a specific application. New and customized memories can be designed and generated in a few days and require no further silicon validation. The resulting memories use a standard SRAM/DRAM interface with identical pinout and integrate seamlessly into a standard ASIC design flow, easing the adoption of next-generation process node technology.
Badawi Dweik is Director of Product Marketing at Memoir Systems, a company specializing in Semiconductor Intellectual Property (SIP). Badawi has over 15 years of memory industry experience in the areas of design, product applications, and marketing. Prior to joining Memoir, Badawi held several management positions at ARM, Epson Electronics, and Kentron Technologies. Badawi started his career at IBM Microelectronics, where he worked on advanced DRAM development for EDO, FPM, SDRAM, and DDR. He has served as a member of the EIA Standardization Committee at JEDEC since 2000. Badawi holds a B.S. in Electrical Engineering (Magna Cum Laude) from Northeastern University and a Master of Business Administration from Regis University.
Badawi can be reached at email@example.com.
Category: Memoir Systems