 EDACafe Editorial

Archive for November, 2019

GPU-accelerated Arm servers; Intel’s new GPUs; a datacenter processor startup; native graph processing

Friday, November 29th, 2019

Another tech giant joins the AI race: Sony Corp. has reportedly launched Sony AI, a new R&D organization focusing on artificial intelligence. AI is also the driving force behind most of the recent news concerning processors of all kinds. Many of these announcements came from SC19 (the International Conference for High Performance Computing, Networking, Storage, and Analysis), recently held in Denver, CO.

Nvidia enables GPU-accelerated Arm-based servers

“Just a few years ago, an Arm-based HPC was difficult to fathom for many. Today, Arm is becoming established as the compelling platform for AI and HPC innovation to build upon.” This website post from Chris Bergey, SVP & GM of the Infrastructure Line of Business at Arm, explains why adding GPU acceleration to Arm architectures is now a hot topic. Earlier this year Nvidia announced that it would bring its CUDA-X software platform to Arm; now the GPU leader is introducing a reference design platform – consisting of hardware and software building blocks – for quickly building GPU-accelerated Arm-based servers for high performance computing applications. As a result, supercomputing centers, hyperscale-cloud operators and enterprises will soon be able to combine Nvidia’s accelerated computing platform with the latest Arm-based server platforms.

Nvidia reference design platform for GPU-accelerated Arm servers. Image credit: Nvidia

New GPUs from Intel

Ponte Vecchio (Italian for ‘old bridge’ and the name of a famous Florence landmark) is the code name of a new category of discrete general-purpose GPUs from Intel, based on the company’s Xe architecture. The new GPUs are designed for HPC modeling, simulation workloads and AI training. Ponte Vecchio will be manufactured on Intel’s 7nm technology and will leverage Intel’s Foveros 3D and EMIB packaging innovations. It will also feature multiple technologies in-package, including high-bandwidth memory and the Compute Express Link interconnect.

A new challenger in the datacenter processor market

With so many startups addressing AI acceleration, the announcement of a startup focusing on ‘regular’ datacenter processors sounds interesting. Even more so if the startup plans to compete directly against the datacenter market leader, Intel. This is reportedly the case with Nuvia, a company based in Santa Clara, CA, and led by industry veterans. The startup sets out with “the goal of reimagining silicon design to deliver industry-leading performance and energy efficiency for the data center.” According to Nuvia, achieving this target will require “a step-function increase in compute performance and power efficiency.” As of today, the company has not released any details about its future products. It recently closed its Series A funding round, raising $53 million; investors include Capricorn Investment Group, Dell Technologies Capital, Mayfield, WRVI Capital and Nepenthe LLC.

Startup bets on native graph processing

Formerly known as Thinci, the AI acceleration startup Blaize has emerged from stealth and introduced its “graph-native” proprietary SoCs, along with a software development platform. This approach is based on the premise that “all neural networks are graphs”, as explained in a press release. According to Blaize, its Graph Streaming Processor (GSP) architecture enables concurrent execution of multiple neural networks and entire workflows on a single system. The architecture uses a data streaming mechanism in which non-computational data movement is minimized or eliminated, thus reducing latency, memory requirements and energy demand. Non-neural-network functions such as image signal processing can also be integrated and represented as graphs, which Blaize says yields a 10-100x boost in processing efficiency. Supported by US$87 million in funding from strategic and venture investors, Blaize claims early-access customer engagements since 2018 in the automotive, smart vision and enterprise computing segments.
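To make the “all neural networks are graphs” idea concrete, here is a toy sketch of graph-native execution: operations are nodes in a DAG, and each node fires as soon as all of its inputs have streamed in. The programming model, node names and helper function here are illustrative assumptions, not Blaize’s actual GSP implementation.

```python
# Toy "graph-native" scheduler: run a DAG of operation nodes in streaming
# (dependency) order. Illustrative only -- not Blaize's GSP architecture.
from collections import defaultdict, deque

def run_graph(nodes, edges, inputs):
    """Execute a DAG of ops; each node runs once all predecessors are done."""
    preds, indeg = defaultdict(list), defaultdict(int)
    for src, dst in edges:
        preds[dst].append(src)
        indeg[dst] += 1
    values = dict(inputs)                       # source nodes come pre-loaded
    ready = deque(n for n in nodes if indeg[n] == 0)
    while ready:
        n = ready.popleft()
        if n not in values:
            values[n] = nodes[n]([values[p] for p in preds[n]])
        for src, dst in edges:                  # stream result to consumers
            if src == n:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    ready.append(dst)
    return values

# A 4-node graph: one input feeding two branches that merge in an add.
nodes = {
    "x": None,                                      # graph input
    "scale": lambda v: [2 * t for t in v[0]],
    "bias":  lambda v: [t + 1 for t in v[0]],
    "add":   lambda v: [a + b for a, b in zip(v[0], v[1])],
}
edges = [("x", "scale"), ("x", "bias"), ("scale", "add"), ("bias", "add")]
out = run_graph(nodes, edges, {"x": [1, 2, 3]})     # add -> [4, 7, 10]
```

In a streaming architecture of the kind Blaize describes, intermediate results would move directly between producer and consumer nodes rather than through off-chip memory; the dictionary here is only a stand-in for that data path.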

Combining x86 cores and AI coprocessor

Definitely not a startup, having been founded in 1995, Austin-based Centaur Technology is also taking part in the AI acceleration race with an SoC that combines eight new server-class x86 CPU cores with a 20 TOPS coprocessor optimized for inference applications in server, cloud and edge products. This approach targets applications that require both x86 compatibility and AI acceleration, offering the benefits of integration over an off-chip accelerator. Benefits cited by the company include lower cost, lower power, higher performance, and the option of avoiding specialized software tools. Eliminating the need to move data to an off-chip accelerator yields very low latency on inference tasks, as demonstrated by MLPerf benchmarks: Centaur Technology submitted audited results for four MLPerf inference applications in the Closed/Preview category, obtaining the best latency score of all submitters on the MobileNet-V1 image classification benchmark. Additionally, as stated in a Centaur press release, the company was the only chip vendor to submit scores for GNMT translation from English to German.

More AI chip news

More AI chips made news over the past couple of weeks. Graphcore’s Colossus GC2 was the subject of a Mentor announcement revealing that this 23.6-billion-transistor chip was verified using Mentor’s Questa RTL simulation flow, in conjunction with the Questa Verification IP solution for PCI Express. Gyrfalcon introduced its new Lightspeeur 5801 AI accelerator, highlighting its power efficiency for edge applications: with 2.8 TOPS of performance at only 224 mW of power, the Lightspeeur 5801 achieves a 12.6 TOPS/W ratio.
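The efficiency figure is easy to sanity-check: TOPS divided by watts. From the rounded numbers in the announcement the ratio comes to 12.5 TOPS/W; the quoted 12.6 presumably reflects unrounded internal figures.

```python
# Back-of-envelope check of Gyrfalcon's quoted efficiency: performance in
# TOPS divided by power in watts. Inputs are the rounded press-release
# numbers, so the result differs slightly from the quoted 12.6 TOPS/W.
tops = 2.8           # tera-operations per second
power_w = 224e-3     # 224 mW expressed in watts
efficiency = tops / power_w
print(round(efficiency, 1))   # 12.5 (TOPS/W)
```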

Getting rid of lithography?

After so many processor announcements, let’s finish with some news from academic research in nanotechnology. What if crystal structures in electronic devices could be synthesized in the desired shape without lithography and etching? A team of researchers from Johns Hopkins University has shown that this is possible – at least for certain transition-metal dichalcogenide (TMD) crystals – using a method that involves pre-treating the silicon surface with phosphine. With this technique, the team built MoS2 (molybdenum disulfide) ‘nanoribbons’ and controlled their width between 50 and 430 nanometers by varying the total phosphine dosage during the surface-treatment step. Potential applications cited by the researchers include optoelectronic devices, energy storage, quantum computing and quantum cryptography.

Intel and the future of neuromorphic technologies

Friday, November 22nd, 2019

The Intel Neuromorphic Research Community (INRC), created by Intel to support its Loihi chip, now has its first corporate members: Accenture, Airbus, GE and Hitachi. These four companies plan to explore the potential of neuromorphic technology in a wide range of applications: Accenture Labs is interested in specialized computing and heterogeneous hardware for use cases such as smart vehicle interaction, distributed infrastructure monitoring and speech recognition; Airbus, in collaboration with Cardiff University, is looking to advance its existing in-house automated malware-detection technology, leveraging Loihi’s low power consumption for constant monitoring; GE will focus on online learning at the edge of the industrial network to enable adaptive controls, autonomous inspection and new capabilities such as real-time inline compression for data storage; and Hitachi plans to use Loihi to quickly recognize and understand time-series data from many high-resolution cameras and sensors. The INRC has tripled in size over the past year and now counts more than 75 members, mostly universities, government labs and neuromorphic startups. These organizations have developed the basic tools, algorithms and methods needed to make Intel’s neuromorphic technology useful in real-world applications; community members have published several papers in academic and scientific journals, some of which can be accessed from the INRC website. Intel is now building on this basic research to win the interest of large corporations. “We are now encouraging commercially-oriented groups to join the community”, said Mike Davies, director of Intel’s Neuromorphic Computing Lab, in the announcement’s press release.

Mike Davies, director of Intel’s Neuromorphic Computing Lab (Credit: Tim Herman/Intel Corporation)


IR drop analysis; die-to-die connectivity; testing AV chips; ADAS devices and more weekly news

Friday, November 15th, 2019

Several interesting announcements are making news this week, both from EDA-IP vendors addressing the requirements of next generation chips, and from chipmakers targeting advanced automotive, consumer and industrial applications.

Addressing the IR drop analysis issues

Due to the effects of highly resistive lower metal layers, one of the challenges posed by the design of advanced high-speed chips at 7 nanometers and below is IR drop analysis. In these designs timing depends on IR drop and vice-versa, making IR drop analysis a key signoff technology. To address these issues, Cadence has integrated two of its preexisting products, the Tempus Timing Signoff Solution and the Voltus IC Power Integrity Solution. According to Cadence, the resulting tool – called the Tempus Power Integrity Solution – lets designers significantly lower IR drop design margins without sacrificing signoff quality, thus improving power and area. Early use cases cited by Cadence showed that the new solution correctly identified IR drop errors prior to tapeout, avoiding silicon failures and improving the maximum frequency in silicon by up to 10%. Other features include a proprietary vectorless algorithm that identifies the critical paths most likely to be impacted by IR drop.
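The underlying physics is simple even if signoff is not: cells tap current from a resistive power stripe, every upstream segment carries all of the downstream current, and the supply voltage sags cumulatively along the rail. The following toy model uses hypothetical values and is vastly simpler than a real signoff tool such as Voltus or Tempus; it only illustrates why resistive lower metal layers erode the voltage seen by distant cells.

```python
# Toy static IR drop model: a rail fed at one end, with cells drawing
# current at evenly spaced taps. Each segment's I*R loss depends on the
# total current still flowing downstream. Hypothetical numbers only.

def rail_voltages(vdd, segment_ohms, tap_amps):
    """Voltage seen at each tap along a power rail fed from one end."""
    voltages, v = [], vdd
    downstream = sum(tap_amps)
    for r, i in zip(segment_ohms, tap_amps):
        v -= downstream * r          # I*R loss across this segment
        voltages.append(v)
        downstream -= i              # current drawn off at this tap
    return voltages

# Four cells on a 0.75 V rail, 50 milliohms per segment, 20 mA per cell.
taps = rail_voltages(0.75, [0.05] * 4, [0.02] * 4)
# The farthest cell sees about 0.740 V: a 10 mV static drop, which is
# exactly the kind of margin a combined timing/power signoff must cover.
```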


Mobileye’s growth; AI-enabled MCUs; 5G superchips; open source silicon; and more industry news

Thursday, November 7th, 2019

Automotive and 5G are making news this week as usual, while a couple of updates involve the positioning technology landscape. A new class of devices joins the open source world, and MEMS production continues to grow. Let’s take a look at these and other weekly news items, including acquisitions and event updates.

ADAS momentum drives Mobileye growth

In the third quarter of 2019, Intel-owned Mobileye achieved 20 percent year-over-year revenue growth, driven by continued advanced driver-assistance systems (ADAS) momentum. By the end of 2019, Mobileye will have shipped more than 50 million EyeQ chips since 2008; today, the company powers ADAS systems in 300 car models across 27 OEM partners. Mobileye has also announced a Level 4 design win with Chinese electric automaker NIO. The collaboration with NIO includes the development of a robotaxi that will be sold exclusively to Mobileye for the global deployment of robotaxi-based ride-sharing services.

Cars in the Mobileye fleet of autonomous vehicles. (Credit: Walden Kirsch/Intel Corporation)

Infineon’s car MCUs to include Synopsys AI IP

Infineon’s next-generation Aurix automotive microcontrollers will integrate a new high-performance AI accelerator, called the Parallel Processing Unit (PPU), that will employ Synopsys’ DesignWare ARC EV Processor IP. Today’s Aurix MCUs already support certain types of neural networks; the new PPU, however, is expected to take their real-time and AI capabilities to an entirely new level.

Samsung’s 5G superchips

Samsung Electronics has recently introduced two impressive chips for future mobile devices that will make intensive use of video, artificial intelligence and 5G communications: the Exynos 990 mobile processor and the 5G Exynos Modem 5123, both manufactured in a 7nm process technology using EUV lithography. The Exynos 990 includes an embedded Arm Mali-G77 GPU, in addition to a tri-cluster CPU structure consisting of two custom cores, two Cortex-A76 cores and four Cortex-A55 cores. The Exynos 990 also includes a dual-core neural processing unit and a DSP that can perform over ten trillion operations per second (10+ TOPS). The 5G Exynos Modem 5123 supports virtually all mobile networks, from 5G’s sub-6 GHz and mmWave spectrum to 2G GSM/CDMA, 3G WCDMA, TD-SCDMA, HSPA and 4G LTE. In 5G, with up to 8-carrier aggregation (8CA), the modem delivers a maximum downlink speed of up to 5.1 gigabits per second (Gbps) in sub-6 GHz and 7.35 Gbps in mmWave, or up to 3.0 Gbps in 4G networks thanks to its support for higher-order 1024 Quadrature Amplitude Modulation (1024-QAM).
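The role of 1024-QAM in those LTE figures comes down to simple arithmetic: M-QAM carries log2(M) bits per symbol, so moving from the more common 256-QAM to 1024-QAM lifts raw bits per symbol from 8 to 10, a 25% increase. This is back-of-envelope illustration only; actual peak rates also depend on carrier aggregation, MIMO order and coding rate.

```python
# Bits per symbol for M-ary QAM is log2(M). Illustrative arithmetic for
# the modulation-order gain mentioned in the text, not a full link budget.
import math

def bits_per_symbol(m: int) -> int:
    return int(math.log2(m))

gain = bits_per_symbol(1024) / bits_per_symbol(256)   # 10 / 8 = 1.25
```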

Virtualizing 5G radio access networks

Nvidia and Ericsson are collaborating on technologies that can allow telco operators to build completely virtualized 5G radio access networks (RAN). Virtualized networks promise a faster and more flexible introduction of new AI and IoT services, such as augmented reality, virtual reality and gaming. The collaboration seeks to create a complete virtualized RAN solution that is comparable with traditionally built RAN networks in terms of cost, size and energy consumption.

Silicon root of trust goes open source

Google has recently announced the first open source silicon root of trust (RoT) project. Dubbed OpenTitan, it is expected to deliver a high-quality RoT design and integration guidelines for use in server motherboards, network cards, client devices (e.g., laptops, phones), consumer routers, IoT devices, and more. Silicon RoT can help ensure that the hardware infrastructure and the software that runs on it remain in their intended, trustworthy state by verifying that the critical system components boot securely using authorized and verifiable code. The project name comes from Google’s own custom-made RoT chip, Titan, used in Google’s data centers. The OpenTitan project is managed by the lowRISC CIC, a not-for-profit company based in Cambridge, UK, and is supported by a coalition that includes ETH Zurich, G+D Mobile Security, Google, Nuvoton Technology, and Western Digital.
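The core idea behind a silicon root of trust can be sketched in a few lines: before handing control to the next boot stage, measure (hash) its image and proceed only if the measurement is one the RoT already trusts. Real designs, OpenTitan included, rely on signed manifests and public-key verification rather than the bare hash allow-list used below, which is a deliberate simplification for illustration.

```python
# Minimal sketch of the verified-boot idea behind a silicon root of trust:
# hash the next boot stage and compare against trusted measurements.
# Simplified illustration -- not OpenTitan's actual design, which uses
# signed manifests and asymmetric cryptography.
import hashlib

def verify_stage(image: bytes, trusted_digests: set) -> bool:
    """Allow the next boot stage only if its SHA-256 digest is trusted."""
    return hashlib.sha256(image).hexdigest() in trusted_digests

firmware = b"bootloader v1.0"                         # hypothetical image
trusted = {hashlib.sha256(firmware).hexdigest()}      # provisioned at build

ok = verify_stage(firmware, trusted)                  # True: boot proceeds
tampered = verify_stage(firmware + b"\x00", trusted)  # False: boot halts
```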

Growth of MEMS and sensors capacity continues

According to a new report published by SEMI, total worldwide installed capacity for MEMS and sensors fabs is forecast to grow 25 percent to 4.7 million wafers per month (200mm equivalent capacity) from 2018 to 2023. Growth will be driven by explosive demand across communications, transportation, medical, mobile, industrial and other Internet of Things (IoT) applications.
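The forecast implies a 2018 baseline that is easy to derive: if 4.7 million wafers per month in 2023 represents 25 percent growth, the 2018 starting point was about 3.76 million wafers per month (200mm equivalents). This baseline is derived arithmetic, not a figure quoted from the SEMI report.

```python
# Derive the implied 2018 baseline from the SEMI forecast: 4.7 million
# wafers/month in 2023 after 25% growth. Derived figure, not from the report.
capacity_2023 = 4.7                     # million wafers per month (200mm eq.)
baseline_2018 = capacity_2023 / 1.25    # undo the 25% growth
print(round(baseline_2018, 2))          # 3.76
```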

Positioning news

A new wireless positioning company has been founded as a spin-off of Acorn Technologies. Called PHY Wireless (La Jolla, CA), it aims to be a pure-play provider of mobile positioning solutions for carriers, chip makers, IoT module manufacturers, application service providers and end users. PHY Wireless’s algorithms allow positioning to be device-based with limited network interaction, thereby reducing power consumption and extending battery life. Also in the positioning market, Locix (San Bruno, CA) has selected Imagination Technologies’ Ensigma IP for use in its Locix LPS solution, a Wi-Fi-based local positioning system for indoor and outdoor environments. Ensigma is an IEEE 802.11ac 2×2 MIMO Wi-Fi baseband and RF IP.

Acquisitions

Rambus has recently completed the previously announced sale of its Payments and Ticketing businesses to Visa. Micron Technology has acquired the artificial intelligence startup FWDNXT (pronounced “forward next”) to explore deep learning solutions, particularly in IoT and edge computing; with this acquisition, Micron aims to offer a comprehensive AI development platform. Ansys has entered into a definitive agreement to acquire Dynardo, a provider of simulation process integration and design optimization (PIDO) technology. AVX Corporation has completed the purchase of Chengdu OK New Energy (COKNE), after four years of collaboration on the manufacturing and development of supercapacitors. Marvell has completed its acquisition of Avera Semiconductor, the ASIC business of Globalfoundries. And following the acquisition that took place last March, Integrated Device Technology (IDT) will change its 39-year-old name to Renesas Electronics America. Founded in 1980, Integrated Device Technology is among Silicon Valley’s oldest chipmakers.

Upcoming events

The Collaborative Robots, Advanced Vision & AI (CRAV.ai) Conference will take place in San Jose, CA, November 12-13. The International Test Conference will open its doors in Washington, DC, November 12-14. Also starting on November 12, but running through the 15th, is Productronica (Munich, Germany), this year co-located with Semicon Europa. The Edge AI Summit will run on November 20 and 21 at the wonderful Computer History Museum in Mountain View, CA.

AI acceleration takes center stage at the 2019 Linley Fall Processor Conference

Friday, November 1st, 2019

Innovative computing concepts challenging traditional architectures, new papers being published at a rate of 16 per day, startups attracting investors’ money, new chips hitting the market: the energies unleashed by neural network-based AI (artificial intelligence) spell exciting times for the IT and semiconductor industries. A good example of this climate was offered by the 2019 Linley Fall Processor Conference (Santa Clara, CA, October 23rd and 24th), organized by the technology analysis firm Linley Group: the event attracted hundreds of attendees and required two parallel tracks – on day one – to accommodate all sponsors. Most speakers addressed AI-related themes, particularly the quest for new processing architectures that boost energy efficiency and speed as required by upcoming AI applications. Some speakers, however, touched on other topics such as 5G and traditional architectures. Here is a quick overview of some of the presentations.

Linley Gwennap at the 2019 Linley Fall Processor Conference. Image credit: Marcus Araiza -Atlas Studios Bay Area






© 2024 Internet Business Systems, Inc.