IP Showcase

DesignWare EV IP: Convolutional Neural Networks at Core of Capabilities
April 1st, 2015, by Peggy Aycinena, contributing editor for EDACafe.com
Early Monday morning, Synopsys announced several new additions to its impressive portfolio of IP blocks: a new family of DesignWare processors targeted at vision applications. With an honorable pedigree – descended from the ARC technology that came to Synopsys via the 2010 acquisition of Virage Logic – the processors announced on March 30th are designed to be embedded in SoCs, specifically to meet a growing need to digitally “distinguish smiles from frowns, faces from cars, baby carriages from trees or dogs, and even sky from ground.”

These needs were articulated in a March 26th phone call with Synopsys Senior Manager of Product Marketing Mike Thompson, who enthusiastically explained, “The vision market will grow dramatically over the next several years. The next 10-to-15 years will be seen as a paradigm-shift period in how we interact with technology.”

That’s why he’s delighted Synopsys will surpass other players in driving that shift: “There are already a few vision processors available [on the market], and they are largely programmable. We took a slightly different approach, however, with the new DesignWare EV Processors we’ve developed.

“These new cores are fully programmable, but also have object detection engines with specialized processing designed to [run] a convolutional neural network executable [see definition below] – an algorithm designed to find a gesture in an image or frame. These deliver higher accuracy than even human [vision] capability.”

“We have two processors in the announcement,” Thompson continued, “both very specialized and very programmable to recognize any object or gesture, giving performance exactly like hardware.”

Comparing and contrasting current capabilities in facial recognition, I mentioned that Facebook is one application that already knows how to pick out the faces in an image field.
He responded, “Yes, that’s true, but with [our new processors] we’re doing far faster, more precise face recognition than your camera or Facebook. What we’re doing is making it possible to have higher-performance capabilities in smaller sizes that can be embedded into SoCs, which then go into inexpensive products.”

“And what products would those be?” I asked.

“Oh gosh,” Thompson responded, “thermostats for starters. The NEST thermostat already has face recognition built in. And your TV set; electronics in the TV have to be very low cost. [The new DesignWare processors] mean your remote control will eventually disappear, [replaced] by a much more natural interface between you and your TV.

“Then there are video games. The [new processors] will help create extraordinarily accurate devices, which will give even better recognition of body position. And the ability to detect facial expressions in gamers will provide for a much more immersive experience.”

Given that NEST was recently acquired, I asked if Google is building the new DesignWare EV vision processors into its thermostats. Thompson was honest: “No, no. The new processors won’t be released until June, with early access in May. But when they are available, a number of important customers [will be using them] for everything from security to tablets, from TVs to automotive applications.”

“Great,” I said. “Any chance you could tell me what they’re going to cost?”

“That’s heavily negotiated,” he said, “so the price is never available. These processors are, by their very nature, very complex. So to make it easier, we will be providing several reference designs that can be used as frameworks. That, and full documentation, will be delivered to customers along with the kernels.” [See details below.]

Thompson concluded, “You ask why someone would want to use these things.
Well, they’re easy to use, are delivered with a complete development environment, the object detection runs seamlessly, and you get the highest accuracy of any vision processor on the market [along with] good quality of results. It does all of that, and the power consumption and area are lower than all competing offerings.

“These new DesignWare EV processors are just better all the way around!”
Per Wikipedia: “In machine learning, a convolutional neural network is a type of feed-forward artificial neural network where the individual neurons are tiled in such a way that they respond to overlapping regions in the visual field. Convolutional networks were inspired by biological processes and are a variation of multi-layer perceptrons, which are designed to use minimal amounts of pre-processing. They are widely used models for image and video recognition.”
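The definition above can be made concrete with a small sketch. The core operation in a CNN layer is a kernel slid across the image to produce a feature map that responds to a local pattern, followed by a non-linearity. This is a minimal pure-Python illustration, not Synopsys code; the EV Processors execute such layers in dedicated hardware:

```python
def conv2d(image, kernel):
    """Slide a k x k kernel over a 2-D image ("valid" padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    """Element-wise non-linearity applied after each convolution."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A 3x3 vertical-edge kernel applied to a 4x4 image whose right half is bright:
# the feature map "fires" wherever the local pattern (a vertical edge) appears.
image = [[0, 0, 9, 9]] * 4
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
fmap = relu(conv2d(image, kernel))  # 2x2 feature map, all entries 27.0
```

A full CNN stacks many such layers (with learned kernels, pooling, and a final classifier), which is why a fixed-function, feed-forward hardware pipeline like the one described in the announcement suits it well.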
30 March 2015 – Synopsys announced availability of the first products in the new DesignWare EV Family of vision processors. The EV52 and EV54 vision processors are fully programmable and configurable vision processor IP cores that combine the flexibility of software solutions with the low cost and low power consumption of dedicated hardware.

The EV Processor Family is supported by a comprehensive software programming environment based on existing and emerging embedded vision standards, including OpenCV and OpenVX, as well as Synopsys’ ARC MetaWare Development Toolkit. The OpenCV source libraries available for the EV Processors provide more than 2500 functions for real-time computer vision. The processors are programmable and can be trained to support any object detection graph. The OpenVX framework includes 43 standard computer vision kernels that have been optimized to run on the EV Processors, such as edge detection, image pyramid creation, and optical flow estimation. Overall, this combination of high-performance hardware optimized for vision data processing and high-productivity programming tools makes the EV Processors an ideal solution for a broad range of embedded vision applications, including video surveillance, gesture recognition, and object detection.

The EV Processors implement a convolutional neural network (CNN) that can operate at more than 1000 GOPS/W, enabling fast and accurate detection of a wide range of objects such as faces, pedestrians, and hand gestures at a fraction of the power consumption of competing vision solutions. The EV Processors include multiple high-performance processing cores that can operate at up to 1 GHz in typical 28-nanometer process technologies. They also implement a feed-forward CNN structure that supports a programmable point-to-point streaming interconnect for fast and accurate object detection, a critical task in vision processing.

The EV Processors are designed to integrate seamlessly into an SoC.
They can be used with any host processor and operate in parallel with the host. The EV Family includes support for synchronization with the host through message passing and interrupts. In addition, the EV Processor memory map is accessible to the host. These features enable the host to maintain control while allowing all vision processing to be offloaded to the EV Processor, reducing power and accelerating results. The EV Processors can access image data stored in a memory-mapped area of the SoC, or from off-chip sources if required, independently from the host through the ARM AMBA AXI standard system interface.

John Koeter, Synopsys VP of Marketing for IP, is quoted: “Synopsys’ new DesignWare EV Processor Family delivers state-of-the-art object detection accuracy with 5X better power efficiency, along with comprehensive vision libraries and a robust software programming environment. This combination enables design teams to integrate embedded vision functionality into more systems faster, with much lower power consumption than existing solutions.”
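Of the OpenVX kernels named in the announcement, image pyramid creation is the simplest to sketch: each level halves the resolution so that detectors can find objects at multiple scales. The following is a dependency-free illustration assuming plain 2x2 averaging (OpenCV’s cv2.pyrDown additionally applies Gaussian smoothing before downsampling):

```python
def pyr_down(image):
    """One pyramid step: halve resolution by averaging each 2x2 block."""
    h, w = len(image), len(image[0])
    return [[(image[i][j] + image[i][j + 1] +
              image[i + 1][j] + image[i + 1][j + 1]) / 4.0
             for j in range(0, w - 1, 2)]
            for i in range(0, h - 1, 2)]

def build_pyramid(image, levels):
    """Return [full-res, half-res, quarter-res, ...] for multi-scale detection."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(pyr_down(pyramid[-1]))
    return pyramid

# An 8x8 gradient image reduced to three pyramid levels: 8x8, 4x4, 2x2.
base = [[float(i + j) for j in range(8)] for i in range(8)]
pyr = build_pyramid(base, 3)
```

Running a fixed-size detector over every level of such a pyramid is how a single trained kernel can find faces both near and far in the same frame.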
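The host/offload arrangement described above (the host enqueues work and stays in control while the vision engine processes frames in parallel, returning results via message passing) is a familiar accelerator pattern. A hypothetical software analogy using Python threads and queues follows; the real EV interface uses the shared memory map, AXI transactions, and interrupts rather than Python objects:

```python
import queue
import threading

def vision_engine(requests, results):
    """Stands in for the EV Processor: drains the request queue in parallel
    with the host and posts results back (message passing)."""
    while True:
        frame = requests.get()
        if frame is None:          # shutdown sentinel from the host
            break
        # Placeholder "object detection": count bright pixels in the frame.
        results.put(sum(v > 128 for row in frame for v in row))

requests, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=vision_engine, args=(requests, results))
worker.start()

# The host submits frames and remains free to do other work in the meantime.
frames = [[[200, 50], [50, 200]], [[10, 10], [10, 10]]]
for f in frames:
    requests.put(f)
requests.put(None)
worker.join()
detections = [results.get() for _ in frames]  # one result per submitted frame
```

The point of the pattern, as in the EV architecture, is that the heavy per-pixel work never runs on the host: the host only submits frames and consumes compact results.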
Tags: ARC, ARC MetaWare Development Toolkit, ARM, Convolutional Neural Network, DesignWare, DesignWare EV Processors, Google, John Koeter, Mike Thompson, NEST, OpenCV, OpenVX, Synopsys, Virage Logic

This entry was posted on Wednesday, April 1st, 2015 at 5:14 pm.