August 15, 2005
GDDR is primarily supplied by Samsung, which holds about 85% of the GDDR market. Hynix and Infineon are currently working to produce GDDR3, but they are still very small players, at about 5% of the market each. Micron is also trying to come up with a GDDR offering.
Other than NVIDIA and ATI who are some of the possible end users of XDR2?
Basically anybody who produces graphics chips. NVIDIA and ATI make up a huge percentage of the market; their combined total dwarfs any other market share. But we clearly have partnerships with companies like S3 and SGI and want to continue those relationships. We feel that XDR2 could bring value to their products as well. Those are the ones off the top of my head, but really anyone who plays in the graphics space.
This product is initially targeted at the graphics industry, although there are some applications in the consumer electronics and networking space. Then, as memory technology progresses, it would become something more mainstream, something that could penetrate the consumer electronics and networking markets. At least for XDR1, we are looking to get into main memory as well, for applications like servers and PCs.
What market share does XDR1 command?
This is a unique technology, and there are currently no products shipping in volume with it. The Cell processor is the first announced product that uses it, and that is coming to market now: Sony is releasing the first Cell-based product in the spring of 2006. IDC estimates that by 2009 over 800 million units of XDR1 will have shipped, driven primarily by game consoles and consumer electronics. We have a number of customers in the CE space as well, but at this point all of them are still confidential.
Some of the technologies in XDR2, such as differential signaling, are also found in other Rambus products.
Differential signaling was essentially invented for XDR. It is also used in FlexIO, a high-speed logic-to-logic processor interconnect that came from the same base core technology development as XDR. It is the incarnation of XDR that helps interconnect two chips, much as a front-side bus connects an Intel Northbridge to an Intel CPU; something like that would be a good application for the FlexIO interconnect. The Cell processor also integrates FlexIO to enable multiple Cell processors to talk to each other, as well as to peripheral chips like the graphics synthesizer and the Southbridge.
Rambus is the only vendor using differential signaling.
XDR is the only memory technology currently using differential signaling for its data. Some generations of DDR use differential signaling for their strobes, but DDR still uses single-ended data.
What are the advantages of the Rambus approach?
Differential signaling generates far less EMI, whereas competing approaches that need to add more EMI shielding add cost to their overall product line. By the mere fact that the swing is 200 millivolts, it is also much lower power from the interface standpoint. On the silicon side, power distribution becomes easier because of the way differential drivers and receivers work: you are not constantly switching massive amounts of current on and off at the power supply. It is basically a constant current draw through the drivers and receivers, so it is easier to design.
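As a rough illustration of the noise advantage described here, the sketch below models a single-ended receiver (which compares one wire against a fixed reference) versus a differential receiver (which subtracts its two wires). The 200 mV swing comes from the interview; the threshold, noise value, and function names are illustrative assumptions.

```python
# Toy model: common-mode noise corrupts a single-ended bit but cancels
# out in a differential receiver. Only the 200 mV swing is from the
# interview; everything else here is a hypothetical illustration.

SWING = 0.200       # 200 mV signal swing
THRESHOLD = 0.100   # single-ended slicer reference (assumed)

def single_ended_bit(v, noise):
    # Receiver compares the noisy wire voltage against a fixed threshold.
    return 1 if (v + noise) > THRESHOLD else 0

def differential_bit(v_pos, v_neg, noise):
    # The same noise lands on both wires and cancels in the subtraction.
    return 1 if (v_pos + noise) - (v_neg + noise) > 0 else 0

# Transmit a logical 1 with 150 mV of common-mode droop on the board:
noise = -0.150
print(single_ended_bit(SWING, noise))       # 0 -- the noise flips the bit
print(differential_bit(SWING, 0.0, noise))  # 1 -- the noise cancels
```

The subtraction in the differential receiver is also why the constant-current behavior mentioned above falls out naturally: one leg sources while the other sinks.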
You are optimistic about XDR2's future based on the parallel with XDR1 in that applications other than those you are initially targeting will come on board.
Absolutely. It is not that we are not targeting those applications at the onset; we just see the initial adopters being in the graphics arena, similar to XDR1. The value proposition doesn't currently exist for consumer electronics and networking, but from a historical perspective we would expect graphics vendors to pick up the technology first.
Does incorporating this technology present any challenges to EDA tools?
Any challenges on the manufacturing side?
We are still evaluating it from a technology standpoint. Currently there is nothing we can publicly announce about the different process technologies or the manufacturing differences between XDR1 and XDR2. Over the next year or two it will be something we look at internally.
I thought it would be helpful to give a few details about micro-threading, a DRAM core innovation developed to increase memory system efficiency and enable DRAMs to provide more usable data bandwidth to requesting memory controllers while minimizing power consumption. See the figure below comparing a traditional memory core to a micro-threaded core.
Most DRAM cores divide their memory storage into discrete banks that can be accessed concurrently. Banks are typically split across the two halves of the DRAM die. Since the DRAM pins are also split across the two halves, each half-bank delivers its data to the pins that correspond to its half. Each time a row within a bank is accessed, the DRAM core dedicates resources on both sides of the DRAM.
A typical DRAM core has eight independent banks. One bank consists of an "A" half connected to the "A" data pins and a "B" half connected to the "B" data pins. The two bank halves operate in parallel in response to row and column commands: a row command selects a single row within each bank half, and two column commands select two column locations within the open row of each bank half. Each group of four bank halves (a "quadrant") has its own set of row and column decoder circuits. However, these resources are operated in parallel, with each transaction utilizing two diagonal quadrants and leaving the other two idle.
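The lockstep bank-half behavior described above can be sketched in a few lines of Python. This is an illustrative model only; the class names, array sizes, and method names are assumptions, not an actual DRAM description.

```python
# Sketch of the traditional (non-micro-threaded) bank organization:
# each bank is an "A" half and a "B" half driven by the same row and
# column commands in parallel, each feeding its own group of data pins.

class HalfBank:
    def __init__(self, rows=4, cols=4):
        self.cells = [[0] * cols for _ in range(rows)]
        self.open_row = None

    def activate(self, row):      # row command: open one row
        self.open_row = row

    def read(self, col):          # column command: read from the open row
        return self.cells[self.open_row][col]

class Bank:
    """One bank = A half + B half operated in lockstep."""
    def __init__(self):
        self.a, self.b = HalfBank(), HalfBank()

    def activate(self, row):
        # A single row command opens the same row in both halves.
        self.a.activate(row)
        self.b.activate(row)

    def read(self, col_a, col_b):
        # Two column commands; data appears on the A pins and B pins
        # concurrently, one value from each half.
        return self.a.read(col_a), self.b.read(col_b)

bank = Bank()
bank.a.cells[1][2] = 7
bank.b.cells[1][3] = 9
bank.activate(1)
print(bank.read(2, 3))  # (7, 9)
```

The point of the sketch is the cost micro-threading attacks: even a small request must engage both halves (and, per the quadrant description, resources on both sides of the die).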
Sixteen bits of data are transferred on each link during a column access. With 16 data links, the column granularity is 32 bytes, and the row granularity is 64 bytes.
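The arithmetic behind those granularity figures is straightforward to check. The sketch below assumes, as the numbers imply, 16 bits transferred per link per column access and two column accesses per row access.

```python
# Back-of-the-envelope check of the access granularities quoted above.
# Assumes 16 bits per link per column access (inferred from the 32-byte
# figure) and two column accesses per row access.

DATA_LINKS = 16
BITS_PER_LINK_PER_COLUMN = 16
COLUMNS_PER_ROW_ACCESS = 2

column_granularity = DATA_LINKS * BITS_PER_LINK_PER_COLUMN // 8  # bytes
row_granularity = COLUMNS_PER_ROW_ACCESS * column_granularity    # bytes

print(column_granularity)  # 32
print(row_granularity)     # 64
```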
-- Jack Horgan, EDACafe.com Contributing Editor.