Everything is going mobile – smartphones, digital cameras and video recorders, tablet computers, media players, game consoles, and the list goes on… These products are required to perform numerous tasks, including handling a wide variety of sensors such as microphones, image sensors, magnetic compasses, 3-axis accelerometers, and sophisticated touch screens. They are also used to capture and play high-definition audio, capture and process images and videos, display high-definition video and graphics, and use Wi-Fi and/or 2G/3G/4G to provide full access to the Internet and to support GPS navigation and location-based services.
Of course every product is different, so for the purposes of these discussions let’s consider a “generic” battery-powered system containing – amongst other things – an application processor, some solid-state memory, sensors in the form of a digital camera and a microphone, output devices in the form of a display screen and a loudspeaker, a baseband IC, and an RF chip. In some cases, many of these functions – excluding peripheral components like the sensors and output devices – may be combined in a single System-on-Chip (SoC) device. Alternatively, one or more SoCs may be used to augment the capabilities of an off-the-shelf application processor. Ultimately, the product will need to employ some form of chip-to-chip communication, along with sensor-to-chip and chip-to-display communications.
When many people hear the term intellectual property (IP) in the context of silicon chips, their knee-jerk reaction is often to think of “cool” things like microprocessor (ARM™, MIPS™) and digital signal processor (DSP) cores. In addition to these cores, however, the design engineers working “in the trenches” know that some of the most important – and numerous – IP cores that they build into their SoCs are used to implement interface functions.
Over the course of time, a profusion of interface standards evolved, such as UART, I2C, I2S, SPI, SDIO, and so forth. In addition, a variety of parallel interface standards associated with camera sensors and display devices appeared on the scene. The result is a morass of confusion. For example, the designers of a mobile device may have to handle as many as five competing and proprietary physical-layer (PHY) interfaces for any given system function.
Having multiple standards negatively affects interoperability, thereby limiting the options available to the product developer. For example, it would typically not be possible to replace an existing sensor with a different, more attractively priced component, because the two devices would almost invariably be based on different interface standards.
Parallel interfaces typically involve more than 10 signals for camera sensors and 20 or more signals for displays, so supporting multiple busses can lead to routing congestion. There’s also the expense, size, and weight involved with parallel connectors. Another consideration is reliability, because each signal and solder joint is a potential point of failure.
Yet another factor is that, as the silicon chips used in mobile devices are implemented in new technology nodes, the silicon dice shrink and can be encapsulated in smaller, lighter packages. These smaller packages, however, support fewer input/output (I/O) pins, which makes parallel interfaces even less attractive.
In 2003, a consortium of companies formed the MIPI Alliance to address all of these issues for mobile devices. The goal of MIPI (www.mipi.org) is to define a suite of interfaces for mobile and consumer products that reduce cost, complexity, power consumption, and EMI while increasing bandwidth and performance. MIPI addresses the following system elements:
- Graphics sub-systems (cameras and displays)
- Storage sub-systems
- Radio sub-systems
- Power management sub-systems
- Low-bandwidth sub-systems (audio, keyboard, mouse, Bluetooth)
It’s important to note that MIPI does not imply a single interface or protocol. Instead, MIPI embraces a suite of protocols and standards that address the unique requirements of the various subsystems. Furthermore, as opposed to the multiple physical layers associated with conventional interfaces, MIPI interfaces, where a physical layer is required, are layered on top of only two: the D-PHY or the M-PHY. The following discussions introduce the main MIPI elements that are already in deployment or are soon to be deployed. Also discussed are some considerations with regard to selecting MIPI IP.
** End of Part 1 **
Prakash Kamath is the Vice President of Engineering at Arasan Chip Systems (www.arasan.com). Responsible for almost 200 engineers worldwide, Prakash has 29 years of extensive design and management experience. Prakash has successfully contributed to establishing Arasan’s “Total IP Solution” and has been instrumental in achieving Arasan’s leadership position with regard to Solid-State Storage and MIPI IP solutions. Prior to joining Arasan in 2002, Prakash held several design and management positions at companies such as AMD, National, and Chips & Technologies. Prakash holds a BS degree from the University of Madras, India, and an MS degree from the University of California, Santa Barbara, USA.