 EDACafe Editorial

Archive for December, 2019

A closer look at Vitis, Xilinx unified software platform

Friday, December 20th, 2019

Last October EDACafe reported on the introduction of Vitis, the new Xilinx unified software platform that enables the development of embedded software and accelerated applications on heterogeneous Xilinx platforms – including FPGAs, SoCs and Versal ACAPs (Adaptive Compute Acceleration Platforms). This week we take a closer look at Vitis with the help of Ramine Roane, Xilinx’s Vice President of Software and AI Product Marketing.

Ramine Roane. Image credit: Xilinx

But first, let’s briefly summarize what this platform is about. Announced at the Xilinx Developer Forum Americas, Vitis allows all developers – including software engineers and AI scientists – to co-develop and optimize hardware and software, using the tools and frameworks they already know and understand, without the need for hardware expertise. As stated in the announcement’s press release, with Vitis developers can leverage integration with high-level frameworks, develop in C, C++, or Python using accelerated libraries, or use RTL-based accelerators and low-level runtime APIs for more fine-grained control over implementation.

Vitis is a four-layer stack architecture, with the third layer offering more than 400 optimized and open-source applications across eight libraries: Basic Linear Algebra Subprograms, Solver, Security, Vision, Data Compression, Quantitative Finance, Database, and AI. These libraries enable developers to call pre-accelerated functions through a standard application programming interface (API). At the time of the announcement, the fourth layer consisted of Vitis AI, which integrates a domain-specific architecture (DSA) that configures Xilinx hardware for frameworks like TensorFlow and Caffe. More DSAs will be released by Xilinx and its ecosystem partners for applications such as video encoding, genome analysis, big data analytics, etc.

On November 12th, at the Xilinx Developer Forum Europe, the company announced that Vitis and its open-source libraries are available for immediate download, free of charge. Also available for free download – since December 2nd, as announced at the Xilinx Developer Forum China – is Vitis AI, the AI inference development platform.
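To make the software-centric flow a little more concrete, here is a minimal sketch of what calling a pre-built accelerated function from C++ host code can look like, using the standard OpenCL API that Vitis supports. The kernel name “vadd”, the “kernel.xclbin” file name and the simple device selection are illustrative assumptions on our part, not actual Vitis library identifiers; a real design would load a binary produced by the Vitis compiler and select the Xilinx platform explicitly.

```cpp
// Hypothetical sketch of a Vitis-style host program: load a pre-compiled FPGA
// binary (xclbin) and invoke an accelerated kernel through standard OpenCL calls.
#define CL_USE_DEPRECATED_OPENCL_1_2_APIS
#define CL_HPP_MINIMUM_OPENCL_VERSION 120
#define CL_HPP_TARGET_OPENCL_VERSION 120
#include <CL/cl2.hpp>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // Pick the first accelerator device found (a real host program would
    // search for the Xilinx platform and the intended card).
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    std::vector<cl::Device> devices;
    platforms[0].getDevices(CL_DEVICE_TYPE_ACCELERATOR, &devices);

    cl::Context context(devices[0]);
    cl::CommandQueue queue(context, devices[0]);

    // Load the pre-compiled FPGA binary; "kernel.xclbin" is a placeholder name.
    std::ifstream bin_file("kernel.xclbin", std::ios::binary);
    std::vector<unsigned char> bin((std::istreambuf_iterator<char>(bin_file)),
                                   std::istreambuf_iterator<char>());
    cl::Program::Binaries binaries{bin};
    std::vector<cl::Device> prog_devices{devices[0]};
    cl::Program program(context, prog_devices, binaries);
    cl::Kernel kernel(program, "vadd");   // hypothetical pre-accelerated function

    // Host data and device buffers for a simple vector addition.
    const size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);
    cl::Buffer buf_a(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                     N * sizeof(float), a.data());
    cl::Buffer buf_b(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                     N * sizeof(float), b.data());
    cl::Buffer buf_c(context, CL_MEM_WRITE_ONLY, N * sizeof(float));

    // Set kernel arguments, launch the accelerator, read the result back.
    kernel.setArg(0, buf_a);
    kernel.setArg(1, buf_b);
    kernel.setArg(2, buf_c);
    kernel.setArg(3, static_cast<int>(N));
    queue.enqueueTask(kernel);
    queue.enqueueReadBuffer(buf_c, CL_TRUE, 0, N * sizeof(float), c.data());

    std::cout << "c[0] = " << c[0] << std::endl;  // expect 3.0
    return 0;
}
```

The point of the example is that, from the software developer’s perspective, the accelerated function is just another callable entry point behind a standard API; the FPGA implementation details stay inside the pre-built binary.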

Low power FPGAs with Risc-V cores; Risc-V-based GPUs; more AI chips; scaling towards 2nm

Friday, December 13th, 2019

Some interesting news this week came from two recent events: the 2019 IEEE International Electron Devices Meeting (IEDM) held in San Francisco from December 7th to 11th, and the Risc-V Summit that took place in San Jose from December 10th to 12th. As usual, AI is the underlying theme for many of these announcements, but other items concern future process nodes and the improving performance of power devices based on new materials.

Lattice and Microchip low power FPGAs with Risc-V cores

SiFive and Lattice will collaborate to enable easy availability of SiFive scalable Core IP for developers using Lattice’s low power FPGA product families, including Lattice’s new 28 nm CrossLink-NX FPGAs. The collaboration will address a diverse array of use cases and markets, from control plane processing in communications infrastructure to data path processing in edge applications. The CrossLink-NX family was designed using the new Lattice Nexus platform, which combines a 28 nm FD-SOI manufacturing process with a new FPGA fabric architecture optimized for low power operation in a small form factor.

And Microchip is opening the Early Access Program for its low power PolarFire system-on-chip FPGA, a platform that offers a hardened, real-time, Linux-capable, Risc-V-based microprocessor subsystem. Targeted applications include embedded systems at the edge in the communications, defense, medical and industrial automation markets.

A Risc-V-based GPU

More news from the Risc-V Summit: Greek company Think Silicon announced a 3D GPU based on the Risc-V instruction set architecture, dubbed NEOX|V. According to the company, the use of a common ISA between the main system CPUs and GPUs will enable new programming paradigms that dynamically balance the computation load between these processing elements. This will also enable a new class of SoCs providing benefits in terms of size, power and openness. NEOX|V offers a framework for integrating custom user instructions, and developers will be able to leverage the growing set of software tools in the Risc-V ecosystem.

Enflame targets AI training acceleration

Most AI startups focus on inference acceleration; Chinese startup Enflame Technology, instead, has announced a new deep learning accelerator for data center training. The 14-billion-transistor chip is built with GlobalFoundries’ 12LP FinFET process and uses 2.5D packaging for integration with high bandwidth memory (HBM2). Based on a reconfigurable chip design approach, Enflame’s Deep Thinking Unit features 32 scalable intelligent processors, arranged in four clusters. As stated in a press release, Enflame is focused on accelerating on-chip communications to increase the speed and accuracy of neural network training while reducing data center power consumption. The press release does not provide details about the processor architecture implemented by Enflame. The chip supports a broad range of data types: FP32, FP16, BF16, Int8, Int16, Int32, etc. It also supports the PCIe 4.0 interface and the “Enflame Smart Link” high-speed interconnect.

CEA-Leti’s RRAM neuromorphic chip

A new chip has joined the neuromorphic effort carried out by universities and research institutes around the world (see EDACafe overview): at the 2019 IEDM conference, French research institute CEA-Leti presented a fully integrated bio-inspired neural network, combining resistive-RAM-based synapses and analog spiking neurons. Researchers pointed out that, to date, demonstrations of RRAM-based spiking neural networks have been limited to system-level simulations calibrated on experimental data. Leti managed to integrate the entire network on-chip: no part is emulated or replaced by an external circuit. The test chip is fabricated in a 130nm CMOS process, leveraging CEA-Leti’s expertise in manufacturing RRAM memories on top of CMOS wafers. The functionality of this proof-of-concept circuit was demonstrated through handwritten digit classification.

SEM cross-section of the RRAM cell monolithically integrated on the top of 130nm CMOS. Image credit: CEA-Leti

TowerJazz’s memristor-based AI core

Another AI-related announcement involving innovative technologies comes from Israel, where the TowerJazz foundry and the Technion – the Israel Institute of Technology – have jointly developed a technological platform that utilizes memristor devices featuring analog memory storage and computing capabilities, enabling ultra-low power AI cores. The platform is based on TowerJazz’s commercial patented Y-Flash NVM on its well-established 0.18um CMOS technology. Single-poly Y-Flash floating gate NVM transistors, originally designed for digital data storage, were converted into two-terminal analog devices operated in the energy-efficient sub-threshold regime. The analog memristors are tuned using optimized switching voltages and times to achieve 65 discrete resistive levels. According to the team, this platform enables several orders of magnitude lower power consumption compared to existing digital solutions, and is very cost-effective as it can be implemented in less advanced technology nodes. Potential applications include IoT edge devices, fingerprint sensors, and face and audio recognition.

Imec’s forksheet device pushes scaling towards 2nm

At the already mentioned IEDM conference, Belgian research institute imec presented the first standard cell simulation results of its “forksheet device” designed for sub-3nm logic technology nodes. Compared to gate-all-around nanosheet devices, the reduced n-to-p spacing results in a 10 percent performance increase. When combined with scaling boosters, the new device architecture will bring logic standard cell height down to 4.3 tracks, which – combined with cell template optimization – can result in more than 20 percent area reduction. According to imec, these results position the forksheet architecture as a potential solution to extend the scalability of nanosheet structures beyond the 3nm logic technology node. The process flow for the forksheet is similar to that of a nanosheet device, with only limited additional process steps.

Layout of SRAM half cells for a) FinFET, b) gate-all-around nanosheet and c) forksheet. Image credit: imec

UnitedSiC’s low Rds(on) SiC FETs

UnitedSiC is introducing four new silicon carbide power FETs, with Rds(on) levels as low as 7mohm. Of the four new UF3C SiC FET devices, one is rated at 650V with an Rds(on) of 7mohm, and three are rated at 1200V with Rds(on) values of 9 and 16mohm. All are available in the TO247 package. The new devices combine a SiC JFET and a cascode-optimized Si MOSFET, a circuit configuration that can be driven with the same gate voltages as Si IGBTs, Si MOSFETs and SiC MOSFETs. According to UnitedSiC, the standard drive characteristics and packaging allow the new SiC FETs to be used as drop-in replacements for less efficient parts in a wide variety of applications, with little or no additional design effort. For example, by switching at the same speed, existing inverter designs could achieve higher efficiency without reinventing their basic circuit architecture. Target applications include electric vehicle inverters, high-power DC/DC converters, high-current battery chargers and solid-state circuit breakers.

Andes’ vector processing; Cadence to acquire AWR; Synopsys’ die-to-die IP; GaN growth; Imagination’s new GPUs

Friday, December 6th, 2019

Risc-V is making news this week, with the upcoming Risc-V Summit (December 10th to 12th in San Jose, CA) bringing several announcements regarding this open source ISA. Other recent news includes interesting updates concerning EDA, IP and power electronics.

Andes adds Risc-V Vector instruction extension

At the Risc-V Summit, Andes will be unveiling details of its new AndesCore 27-series of CPU cores. The 27-series is the first licensable Risc-V core to deliver the Risc-V Vector instruction extension (RVV) to a production licensee. The cores’ memory subsystem has also been re-architected to meet the RVV requirements in terms of memory bandwidth and efficiency. The RVV is especially targeted at the complex computation of large volumes of matrix data required by emerging applications such as AI, AR/VR, computer vision, cryptography, and multimedia processing. According to Andes, the Risc-V Vector instruction extension differs from advanced SIMD architectures in that it provides more flexibility, with scalable data sizes, flexible microarchitecture implementations, and a memory subsystem that can be optimized at the system level.

One of the new cores, dubbed NX27V, contains a Vector Processing Unit (VPU) that allows an arbitrary vector length from 64-bit to 512-bit, and even 4096-bit by combining eight vector registers. Computation of integer, fixed-point, floating-point, and other AI-optimized representations can use any bit width from 4 to 32 bits. Among other architectural innovations, the 27-series supports multiple outstanding memory accesses, so neither the scalar nor the vector processor has to wait for data during cache misses. In addition, cache pre-fetches allow the memory to prepare data in advance of the processor’s needs. Andes describes this new series as “ground-breaking”, claiming that its VPU has been “designed from the ground up to be a Cray-like full vectorization computation unit”, as opposed to “some advanced SIMD” offering only “incremental” performance growth over preexisting SIMD architectures.

The 27-series processor beta release was delivered to Andes’ first licensee in early December 2019; the production database release is scheduled for Q1 2020. Initially available will be the 32-bit A27, the 64-bit AX27 (both tailored for applications running Linux) and the above-mentioned NX27V.
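As an illustration of the scalability the RVV approach promises, here is a minimal vector-length-agnostic sketch using the RVV C intrinsics. The intrinsic names follow the current ratified intrinsics specification and are assumptions on our part; early 27-series toolchains, which target a pre-ratification draft of RVV, may use different names. The key idea is that the same source code runs unchanged on implementations with 64-bit, 512-bit or 4096-bit vector registers, because vsetvl asks the hardware how many elements it can process per loop iteration.

```cpp
// Vector-length-agnostic vector addition using the RVV C intrinsics.
// A sketch only: assumes a compiler and core supporting the ratified RVV
// intrinsics API (header <riscv_vector.h>, __riscv_* intrinsic names).
#include <riscv_vector.h>
#include <cstddef>

void vec_add(const float *a, const float *b, float *c, size_t n) {
    while (n > 0) {
        // Ask the hardware how many 32-bit elements it will process in this
        // strip; the answer depends on the implementation's vector width.
        size_t vl = __riscv_vsetvl_e32m1(n);
        vfloat32m1_t va = __riscv_vle32_v_f32m1(a, vl);        // vector load
        vfloat32m1_t vb = __riscv_vle32_v_f32m1(b, vl);        // vector load
        vfloat32m1_t vc = __riscv_vfadd_vv_f32m1(va, vb, vl);  // vector add
        __riscv_vse32_v_f32m1(c, vc, vl);                      // vector store
        a += vl; b += vl; c += vl; n -= vl;
    }
}
```

This strip-mining pattern is what distinguishes RVV from fixed-width SIMD extensions: the binary does not encode the vector width, so the same code automatically exploits wider vector units as implementations scale up.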




