 EDACafe Editorial

Archive for March, 2020

EDA 2019 results; EUV-based DRAMs; supercomputers; fast charging batteries; image sensors

Friday, March 27th, 2020

There is no need to stress the impact that the current pandemic is having on every aspect and activity of the semiconductor and ICT industry. Here we will only briefly mention a couple of pandemic-related news items, then move on to some of our usual market and technology updates.

Analog Devices and Infineon withdraw their market outlooks

Analog Devices “believes it is prudent to withdraw the company’s outlook for the fiscal second quarter, ending May 2, 2020,” as quantifying and forecasting the business impact of COVID-19 has become increasingly difficult. The company will provide a further update during its second quarter earnings release and call in May 2020. Infineon, for its part, is withdrawing its outlook for the whole 2020 fiscal year, citing “low visibility”. Given the uncertainty regarding the severity and the length of the pandemic’s economic impact, the German chipmaker believes that “the specific implications on sales and earnings for the 2020 fiscal year cannot be reliably assessed or quantified.”

EDA industry 2019 results

According to a report recently released by the ESD Alliance, EDA industry revenue reached $10.2 billion in 2019 – an 8.3 percent increase over 2018. In terms of four-quarters moving average, the fastest growing EDA category was PCB & MCM, up 15.1 percent, while the only declining category was “services,” down 10.9 percent. On a regional basis – again in terms of four-quarters moving average – 2019 saw significant increases in EMEA and APAC (8.6 and 13.6 percent respectively), while Japan reported a 6.7 percent decrease. The Americas, EDA’s largest region, remained nearly flat with a 2.4 percent increase. Companies tracked by the ESD Alliance employed 45,416 professionals in Q4 2019, up 6.1 percent from the 42,790 employed in Q4 2018.
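As a quick illustration of the “four-quarters moving average” metric used above, the sketch below averages the most recent four quarters of revenue and compares year over year. The quarterly figures are invented for illustration and are not ESD Alliance data.

```python
# Sketch of a "four-quarters moving average" growth calculation.
# Quarterly revenue figures below are hypothetical, not actual ESDA numbers.

def four_quarter_avg(quarters):
    """Mean of the most recent four quarterly revenue figures."""
    return sum(quarters[-4:]) / 4

revenue_2018 = [2.20, 2.30, 2.35, 2.45]  # hypothetical $B per quarter
revenue_2019 = [2.40, 2.50, 2.60, 2.70]

growth = (four_quarter_avg(revenue_2019) / four_quarter_avg(revenue_2018) - 1) * 100
print(f"Four-quarter moving-average growth: {growth:.1f}%")
```

Averaging four quarters smooths out seasonal swings, which is why the per-category growth numbers reported by the ESD Alliance can differ from simple quarter-over-quarter comparisons.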

Samsung successfully using EUV lithography for DRAM manufacturing

Samsung has announced that it has successfully shipped one million of the industry’s first 10nm-class (D1x) DDR4 DRAM modules based on extreme ultraviolet (EUV) technology. The company claims to be the first to adopt EUV in DRAM production, to overcome challenges in scaling. EUV will be fully deployed in Samsung’s future generations of DRAM, such as D1a-based DDR5 and LPDDR5 which the company expects to begin producing in volume next year. To better address the growing demand for next-generation premium DRAM, Samsung will start the operation of a second semiconductor fabrication line in Pyeongtaek, South Korea, within the second half of this year.

Samsung Electronics Hwaseong Campus (Photo: Business Wire)

Hints about El Capitan supercomputer

On March 5th, Lawrence Livermore National Laboratory, Hewlett Packard Enterprise and Advanced Micro Devices announced the selection of AMD as the node supplier for El Capitan, one of the three exascale supercomputers that the US Department of Energy plans to deploy over the next few years. The announcement provided some details, stating that El Capitan will be powered by next-generation AMD Epyc processors code-named “Genoa” and by next-generation AMD Radeon Instinct GPUs based on a new compute-optimized architecture for HPC and AI workloads. Also on March 5th, AMD’s Financial Analyst Day offered a preview of the company’s GPU roadmap, announcing a “compute-optimized GPU architecture” called CDNA, which will evolve into a second generation, CDNA 2. Now microprocessor expert Linley Gwennap has pieced together the available information to infer something more about the DOE exascale supercomputer: “[AMD] confirmed that El Capitan will feature a fourth generation Epyc processor, code-named Genoa, implemented in 5nm technology. The system will likely include the second-generation CDNA GPU. Both Genoa and CDNA 2 implement a third-generation Infinity Fabric that coherently couples the CPU with up to four GPUs. We expect CDNA 2 will use AMD’s future X3D packaging technology to combine multiple chiplets with four HBM stacks on a single substrate.” Gwennap also comments on the choice of a single vendor for both CPU and GPU, which also applies to another DOE supercomputer combining Intel Xeon processors and Intel’s future Xe GPU: “This move to single-vendor CPU+GPU combinations leaves Nvidia, which dominates the world’s top supercomputers, completely out of the picture for these big American systems,” Gwennap observes.

El Capitan. Image credit: LLNL

Extremely fast charging EV batteries

A new Li-ion battery technology promises to enable electric vehicles to run 400 kilometers (almost 250 miles) on a five-minute charge. Developing such a technology is Enevate, a company based in Irvine, CA, reportedly using an innovative anode consisting of a porous film made mainly of pure silicon. According to the company, the new inexpensive anode material will lead to a 30 percent increase in the range of electric vehicles on a single charge. Enevate’s patented process creates the porous 10- to 60-µm-thick silicon film directly on a copper foil, and includes a nanometers-thick protective coating to prevent the silicon from reacting with the electrolyte. Enevate claims that its roll-to-roll processing techniques reduce cost compared to graphite anodes and allow high-volume manufacturing. Energy density can also be increased: the company has made battery cells reaching 350 watt-hours per kilogram. Enevate is reportedly working with multiple major automotive companies to develop standard-size battery cells for electric vehicles hitting the market in 2024-2025.

Benjamin Park, Enevate’s founder and CTO. Image credit: Enevate

Small global-shutter image sensors from STMicroelectronics

Global-shutter image sensors capture all pixel data in each frame simultaneously, as opposed to “rolling-shutter” operation, which captures pixel data sequentially one line at a time – not the best option with moving images. Now two new global-shutter image sensors from STMicroelectronics claim high speed and very small die size. The new sensors are the VD55G0 with 640 x 600 pixels and the VD56G3 with 1.5 Mpixels (1124 x 1364), measuring 2.6mm x 2.5mm and 3.6mm x 4.3mm respectively. ST claims low pixel-to-pixel crosstalk at all wavelengths, ensuring high contrast for superior image clarity. Embedded optical-flow processing in the VD56G3 calculates movement vectors without the need for host computer processing. Key to this performance is ST’s advanced pixel technology, including full Deep Trench Isolation (DTI), which enables extremely small 2.61μm x 2.61μm pixels. The ST approach allows space-saving vertical stacking of the optical sensor and associated signal-processing circuitry on the bottom die. Integrated digital circuitry incorporates hardware features including an exposure algorithm, automatic defect correction, and automatic dark calibration. These new sensors are suited to applications such as Augmented and Virtual Reality (AR/VR), Simultaneous Localization and Mapping (SLAM), and 3D scanning.
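The distortion a rolling shutter introduces for moving subjects can be shown with a toy simulation, unrelated to ST’s actual sensors: a vertical bar that moves one pixel per row-readout interval appears straight to a global shutter but slanted to a rolling shutter.

```python
import numpy as np

# Toy illustration of global vs. rolling shutter (not ST's implementation).
# A global shutter samples every row at t=0; a rolling shutter samples
# row r at t=r, so a horizontally moving bar comes out skewed.

H = W = 8

def bar_position(t):
    """x-coordinate of a vertical bar moving one pixel per time step."""
    return 2 + t

global_frame = np.zeros((H, W), dtype=int)
rolling_frame = np.zeros((H, W), dtype=int)
for r in range(H):
    global_frame[r, bar_position(0) % W] = 1    # all rows captured at once
    rolling_frame[r, bar_position(r) % W] = 1   # each row captured later

print(global_frame)    # straight vertical bar
print(rolling_frame)   # diagonal "skewed" bar
```

The skewed second frame is the familiar rolling-shutter artifact (leaning buildings in panning shots, bent propeller blades) that global-shutter sensors avoid.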

Updates from Cadence and Synopsys; Intel’s 100 million neurons system; IoT cross-technology wireless communication; and more industry news

Friday, March 20th, 2020

Recent announcements from Cadence and Synopsys testify that concurrent satisfaction of multiple constraints is a key challenge in today’s chip design. Other news this week covers advancements in Intel’s neuromorphic initiative, research on IoT cross-technology wireless communication, IC market data, and an event update.

New Cadence digital full flow

The new release of the Cadence digital full flow has been enhanced to further optimize power, performance and area results. One of the enabling innovations is what Cadence has dubbed “iSpatial technology”, which integrates the Innovus Implementation System’s GigaPlace Placement Engine and the GigaOpt Optimizer into the Genus Synthesis Solution. As Cadence explained in a press release, the iSpatial technology allows a seamless transition from Genus physical synthesis to Innovus implementation using a common user interface and database. Other enhancements include machine learning capabilities that enable users to leverage their existing designs to train the new optimization technology to minimize design margins, as well as unified implementation, timing, and IR signoff engines. Engine unification enhances signoff convergence by concurrently closing the design for all physical, timing and reliability targets. According to Cadence, this allows customers to reduce design margins and iterations.

The new Cadence digital full flow. Image credit: Cadence

Synopsys shifts left RTL closure

Synopsys has introduced RTL Architect, a product aiming to “shift left” (move to an earlier phase in the design flow) RTL design closure. According to the company, RTL Architect is the industry’s first “physically aware RTL design system”, cutting the SoC implementation cycle in half and delivering superior quality-of-results (QoR). This is achieved through a fast, multi-dimensional implementation prediction engine that enables RTL designers to predict the power, performance, area, and congestion impact of their RTL changes. The RTL Architect system is built on a unified data model that provides multi-billion-gate capacity and comprehensive hierarchical design capabilities. Synopsys’ PrimePower golden signoff power analysis engine is directly integrated with the new product.

Intel’s neuromorphic system reaches 100 million neurons

Intel has recently announced the readiness of Pohoiki Springs, its latest and most powerful neuromorphic research system providing the computational capacity of 100 million neurons – the size of a small mammal brain. Pohoiki Springs, a data center rack-mounted system, integrates 768 Loihi neuromorphic research chips inside a chassis the size of five standard servers. The cloud-based system will be made available to members of the Intel Neuromorphic Research Community (INRC).

Intel’s Pohoiki Springs. Image credit: Intel Corporation

Another recent advancement concerning Loihi is a study demonstrating its ability to “smell” and recognize ten hazardous chemicals. Training was carried out using a dataset from an array of 72 sensors detecting chemicals in the air. The research, jointly carried out by Intel Labs and Cornell University, demonstrated that Loihi is particularly efficient at this task, even in the presence of significant noise and occlusion: the chip learned each “odor” from just a single sample, without disrupting its memory of previously learned scents, and showed superior recognition accuracy compared with conventional state-of-the-art methods. Key to this performance is a neural algorithm derived from the architecture and dynamics of the brain’s olfactory circuits.

Energy bursts connect different wireless standards in IoT devices

Researchers at TU Graz (Graz University of Technology, based in Graz, Austria) have developed a solution that enables direct information exchange between commercially available IoT devices that use different wireless technologies (Wi-Fi, Bluetooth, ZigBee) but the same radio frequencies. Called X-Burst, the solution is based on the wireless transmission and reception of energy pulses (energy bursts) and leverages the ability of most IoT devices to generate and detect such pulses, irrespective of the wireless standard they use. Bursts contain data packets of varying lengths, where the information is encoded in the duration of the packets. The receivers monitor the energy level in the radio channel and can thus detect the packets, determine their duration and finally extract the information they contain. According to the research team, the solution enables communication between different wireless technologies without the need for expensive and inflexible gateways. The researchers concentrated primarily on data exchange in the license-free 2.4 GHz band. The solution also enables the system clocks of the various devices to be synchronized, allowing action coordination, and negotiation of radio frequencies to minimize cross-technology interference.
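A minimal sketch of the duration-encoding idea behind X-Burst follows; the slot length and the bit-to-duration mapping are illustrative assumptions, not the TU Graz parameters.

```python
# Rough sketch of the X-Burst principle: information is encoded in the
# duration of an energy burst, which any radio can measure by watching
# channel energy. Durations and slot size below are hypothetical.

SLOT_US = 100                      # duration quantum, illustrative
DURATIONS = {0: 2, 1: 4}           # bit -> burst length in slots

def transmit(bits, gap_slots=3):
    """Return a sampled energy trace: 1 = channel busy, 0 = idle."""
    trace = []
    for b in bits:
        trace += [1] * DURATIONS[b] + [0] * gap_slots
    return trace

def receive(trace):
    """Recover bits by measuring the duration of each busy period."""
    bits, run = [], 0
    for sample in trace + [0]:     # trailing 0 flushes the last burst
        if sample:
            run += 1
        elif run:
            bits.append(min(DURATIONS, key=lambda b: abs(DURATIONS[b] - run)))
            run = 0
    return bits

msg = [1, 0, 1, 1, 0]
assert receive(transmit(msg)) == msg
```

Because the receiver only measures how long the channel is busy, the scheme works across Wi-Fi, Bluetooth and ZigBee radios that cannot decode each other’s packets, which is exactly the cross-technology property the researchers exploit.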

U.S. IC companies maintained global market share lead in 2019

According to a report from market research firm IC Insights, 2019 regional market shares of IDMs (companies operating wafer fabs), fabless companies, and total IC sales were led by U.S. headquartered companies. U.S. companies held 55% of the total worldwide IC market in 2019 followed by the South Korean companies with a 21% share, down six percentage points from 2018. Taiwanese companies, on the strength of their fabless company IC sales, held 6% of total IC sales, one point less than the European companies. As highlighted by the report, South Korean and Japanese companies have an extremely weak presence in the fabless IC segment and the Taiwanese and Chinese companies have a noticeably low share of the IDM portion of the IC market. Overall, U.S.-headquartered companies showed the best-balanced combination of IDM, fabless, and total IC industry market share. The report also provides year-on-year (2019 versus 2018) sales growth data on a regional basis. This part of the research shows that the South Korean-headquartered companies – primarily Samsung and SK Hynix – registered a 32% sales drop, the worst of any major country/region. This was driven by a collapse in DRAM and NAND flash memory IC sales in 2019.

Linley Spring Processor Conference goes virtual

The upcoming Linley Spring Processor Conference, scheduled for early April in Santa Clara, CA, will be held as a virtual event. Attendees connected through the Internet will be able to view live-streamed presentations and interact with the speakers during Q&A and breakout sessions. The virtual conference will be online from April 6th to 9th, mornings only (from 9:00 am to 1:00 pm PDT). Topics will include AI for ultra-low-power applications; 5G and AI at the network edge; data center processors and accelerators; AI for embedded applications; and processor technology.

AI-optimized chip design; image sensor with neural network capability; nanoelectromechanical relays; latest acquisitions

Friday, March 13th, 2020

Last week we briefly addressed the theme of machine learning in chip design; this week a Synopsys announcement provides a significant update on this topic. Other news includes some interesting academic research work.

Exploring design space with artificial intelligence

Synopsys has introduced DSO.ai (Design Space Optimization AI), which it claims to be the industry’s first autonomous artificial intelligence application for chip design, capable of searching for optimization targets in very large solution spaces. DSO.ai ingests large data streams generated by chip design tools and uses them to explore search spaces, observing how a design evolves over time and adjusting design choices, technology parameters, and workflows to guide the exploration process towards multi-dimensional optimization objectives. The new AI application uses machine-learning technology invented by Synopsys R&D to execute searches at massive scale: according to the company, DSO.ai autonomously operates tens-to-thousands of exploration vectors and ingests gigabytes of high-velocity design analysis data – all in real time. At the same time, the solution automates less consequential decisions, such as tuning tool settings. The announcement press release includes a quote from early adopter Samsung, testifying that Synopsys’ DSO.ai systematically found optimal design solutions that exceeded previous power, performance, and area results. Furthermore, DSO.ai achieved these results in three days – as opposed to more than a month of experimentation when the process is performed by a team of expert designers.
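The underlying design-space-exploration idea can be sketched with a toy random search standing in for Synopsys’ proprietary ML engine. The `run_flow` stand-in and its cost model below are invented for illustration only.

```python
import random

# Illustrative design-space search in the spirit of DSO.ai: each "vector"
# is a choice of tool settings, scored by a combined power/delay/area cost.
# run_flow() is a made-up stand-in for an actual EDA flow run.

def run_flow(settings):
    """Stand-in for an EDA flow run: returns (power, delay, area)."""
    effort, utilization = settings
    power = 1.0 + 0.5 * utilization - 0.1 * effort
    delay = 2.0 - 0.3 * effort + 0.2 * utilization
    area  = 3.0 - 1.5 * utilization
    return power, delay, area

def cost(settings):
    p, d, a = run_flow(settings)
    return p + d + a               # weighted sum; all weights 1 here

random.seed(0)
best = min(((random.uniform(0.0, 1.0), random.uniform(0.5, 0.9))
            for _ in range(200)), key=cost)
print("best settings:", best, "cost:", round(cost(best), 3))
```

A real system replaces the random sampler with a learned model that decides which settings to try next, so far fewer flow runs are needed to reach a good corner of the space.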

New Xilinx adaptive compute acceleration platform

Xilinx has announced Versal Premium, the third series in the Versal ACAP (adaptive compute acceleration platform) portfolio. The Versal Premium series is built on a foundation of the currently shipping Versal AI Core and Versal Prime ACAP series. New and unique to Versal Premium are 112Gbps PAM4 transceivers, multi-hundred gigabit Ethernet and Interlaken connectivity, high-speed cryptography, and PCIe Gen5 with built-in DMA, supporting both CCIX and CXL. The new platform is designed for the highest bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers who need scalable, adaptable application acceleration.

Image credit: Xilinx

Low power audio AI applications at the edge

Cadence has optimized the software of its Tensilica HiFi DSPs to efficiently execute TensorFlow Lite for Microcontrollers, part of the TensorFlow end-to-end open-source platform for machine learning from Google. This promotes rapid development of edge applications that use artificial intelligence and machine learning, removing the need for hand-coding the neural networks. According to Cadence, Tensilica HiFi DSPs are the most widely licensed DSPs for audio, voice and AI speech; support for TensorFlow Lite for Microcontrollers enables licensees to innovate with ML applications like keyword detection, audio scene detection, noise reduction and voice recognition, with an extremely low power footprint.

Image sensor with neural network capability

Researchers from Vienna University of Technology (Vienna, Austria) have demonstrated how an image sensor can itself constitute a neural network that can simultaneously sense and process optical images without latency. The device is based on a reconfigurable two-dimensional array of tungsten diselenide photodiodes, and the synaptic weights of the network are stored in a continuously tunable photoresponsivity matrix. In other words, the sensitivity of each photodiode can be individually adjusted by altering an applied voltage, and sensitivity factors work like weights in a neural network. By creating the appropriate sensitivity pattern, the image sensor as a whole acquires the ability to perform a basic machine-learning function. The experimental device is a square array of nine pixels, with each pixel consisting of three photodiodes; the resulting currents (analog signals) are summed along a row or column, according to Kirchhoff’s law. The researchers demonstrated that the device could sort an image into one of three classes that correspond to three simplified letters, identifying the letter in nanoseconds. Throughput is in the range of 20 million bins per second. Practical applications of this interesting concept would require solving a number of problems inherent to the technology used in the research chip, such as difficult imaging under dim light, high power consumption, difficult manufacturing over large areas, and so on. With different sensors, the same concept could be extended to other physical inputs for auditory, tactile, thermal or olfactory sensing.

Image credit: Nature
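The in-sensor computing principle (responsivities as weights, summed photocurrents as a matrix-vector product) can be sketched in a few lines. The 3x3 letter templates and matched-filter weights below are illustrative, not the actual device parameters.

```python
import numpy as np

# Sketch of the TU Wien concept: each pixel's photoresponsivity acts as a
# trainable weight, and photocurrents summed along a line implement a
# matrix-vector product. Letters and weights are invented for illustration.

letters = {                        # simplified 3x3 binary "letter" images
    "v": np.array([[1, 0, 1], [1, 0, 1], [0, 1, 0]]),
    "n": np.array([[1, 1, 1], [1, 0, 1], [1, 0, 1]]),
    "z": np.array([[1, 1, 1], [0, 1, 0], [1, 1, 1]]),
}

# One row of responsivity weights per class; here each class's weights are
# simply its own template (a matched filter), which the device would learn.
W = np.stack([img.flatten() for img in letters.values()]).astype(float)
W -= W.mean(axis=1, keepdims=True)   # zero-mean so classes compete fairly

def classify(image):
    currents = W @ image.flatten()   # summed photocurrents, one per class
    return list(letters)[int(np.argmax(currents))]

assert all(classify(img) == name for name, img in letters.items())
```

Because the weighted sums happen in the photocurrents themselves, classification takes place during light capture rather than after readout, which is why the researchers report nanosecond-scale recognition.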

Nanoelectromechanical non-volatile memories withstand 200°C

Nanoelectromechanical relays have emerged as a promising alternative to transistors for creating non-volatile memories that can operate in extreme temperatures with high energy efficiency. However, until now, a reliable and scalable non-volatile relay that retains its state when powered off had not been demonstrated. Now researchers from the University of Bristol, in collaboration with the University of Southampton and the Royal Institute of Technology (Sweden), have come up with a new architecture that overcomes those limitations. As the team explained, part of the challenge lies in the way electromechanical relays operate. When actuated, a beam anchored at one end moves under an electrostatic force; as the beam moves, the airgap between the actuation electrode and the beam rapidly shrinks while the capacitance increases. At a critical voltage called the pull-in voltage, the electrostatic force becomes much greater than the opposing spring force and the beam snaps in. The Bristol team explained that this inherent electromechanical pull-in instability makes precise control of the moving beam – critical for non-volatile operation – very difficult. The new device, instead, is a rotational relay that maintains a constant airgap as the beam moves, eliminating the pull-in instability. Using this relay, the researchers have demonstrated the first high-temperature non-volatile nanoelectromechanical relay operation, at 200 °C. Potential applications include electric vehicles as well as zero-standby-power intelligent nodes for the IoT.

Image credit: Dr Dinesh Pamunuwa
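For a sense of the voltages involved, the textbook pull-in voltage of a parallel-plate electrostatic actuator is V_pi = sqrt(8 k g^3 / (27 eps0 A)), with snap-in occurring after roughly one third of the gap is closed. The sketch below uses this standard parallel-plate model with hypothetical NEM-relay numbers, not the Bristol rotational geometry.

```python
from math import sqrt

# Back-of-the-envelope pull-in voltage for a textbook parallel-plate
# electrostatic actuator. Beyond V_pi the electrostatic force overwhelms
# the restoring spring and the beam snaps in - the instability the
# Bristol rotational design avoids. Device numbers below are hypothetical.

EPS0 = 8.854e-12          # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """V_pi = sqrt(8*k*g^3 / (27*eps0*A)); snap-in occurs at ~g/3 travel."""
    return sqrt(8 * k * gap**3 / (27 * EPS0 * area))

# Hypothetical NEM-relay numbers: 1 N/m spring, 50 nm gap, 1 um^2 electrode
print(f"{pull_in_voltage(k=1.0, gap=50e-9, area=1e-12):.2f} V")
```

The cubic dependence on the gap is why a constant-airgap (rotational) geometry sidesteps the instability: the gap, and hence the force balance, no longer changes as the beam moves.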

Acquisitions

Ansys has entered into a definitive agreement to acquire Lumerical, a developer of photonic design and simulation tools. With optical networks becoming increasingly important in data center architectures and other applications, Lumerical’s products enable designers to model problems in photonics, including interacting optical, electrical and thermal effects. Infineon Technologies will proceed to acquire Cypress Semiconductor; the Committee on Foreign Investment in the United States (CFIUS) has concluded its review of the planned acquisition and cleared the transaction. TE Connectivity, a provider of connectivity and sensing solutions, completed its public takeover of First Sensor, a German player in sensor technology. TE now holds 71.87% of First Sensor’s shares. Silicon Labs has entered into a definitive agreement with Redpine Signals to acquire the company’s Wi-Fi and Bluetooth business, development center in Hyderabad, India, and patent portfolio for $308 million in cash. The integration of the Redpine Signals technology is expected to accelerate Silicon Labs’ roadmap for Wi-Fi 6 silicon, software and solutions.

Machine learning in chip design; Ceva’s superfast DSP; Ansys results; latest acquisitions

Friday, March 6th, 2020

The growing role of neural networks in chip design has been a recurring theme over the past few weeks, in speeches or announcements involving a number of different subjects. Meanwhile, the new golden age of innovative processing architectures continues, spurred by 5G requirements. Other recent news includes more EDA vendors’ end-of-year results, and acquisitions in the semiconductor industry.

Machine learning to improve place-and-route in chip design

Better placement and routing in much less time and, ultimately, a dramatic reduction of ASIC design time: this is what machine learning promises to the chip designer community. ML-powered place-and-route was one of the key points of Google’s Jeffrey Dean keynote speech at the recent ISSCC. In a paper packed with interesting insights and data about machine learning evolution, Dean addressed this issue with concepts that are bound to attract attention. According to Dean, “placement and routing is a problem that is amenable to the sorts of reinforcement learning approaches that were successful in solving games, like AlphaGo. (…) By having a reinforcement learning algorithm learn to ‘play’ the game of placement and routing (…), with a reward function that combines the various attributes into a single numerical reward function, and by applying significant amounts of machine-learning computation (in the form of ML accelerators), it may be possible to have a system that can do placement and routing more rapidly and more effectively than a team of human experts working with existing electronic design tools for placement and routing”, Dean maintains. Google has been exploring these approaches internally, obtaining promising results; some of them have been described in this EETimes article. In his paper, Dean cites more potential benefits that chip design could get from machine learning: “The automated ML based system also enables rapid design space exploration, as the reward function can be easily adjusted to optimize for different trade-offs in target optimization metrics. Furthermore – he continues – it may even be possible to train a machine learning system to make a whole series of decisions from high-level synthesis down to actual low-level logic representations (…)”.
According to Dean, this automated end-to-end flow could potentially reduce the time for a complex ASIC design from many months down to weeks, thus allowing the development of custom chips for a much larger range of applications.

Jeff Dean. Image credit: Google
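The “single numerical reward function” Dean describes can be sketched as a weighted combination of placement quality metrics; the metrics and weights below are invented for illustration, not Google’s actual formulation.

```python
# Toy illustration of folding several placement metrics into one scalar
# reward that an RL agent could maximize. Metric names and weights are
# invented for illustration.

WEIGHTS = {"wirelength": -1.0, "congestion": -5.0, "timing_slack": +2.0}

def reward(metrics):
    """Combine heterogeneous placement metrics into one scalar reward."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

candidate_a = {"wirelength": 10.0, "congestion": 0.2, "timing_slack": 1.5}
candidate_b = {"wirelength": 9.0, "congestion": 0.9, "timing_slack": 1.6}

# a: -10 - 1 + 3 = -8.0 ; b: -9 - 4.5 + 3.2 = -10.3 -> agent prefers a
best = max([candidate_a, candidate_b], key=reward)
print("preferred:", "a" if best is candidate_a else "b")
```

Adjusting the weights is what Dean means by tuning the reward “for different trade-offs in target optimization metrics”: the same agent can be steered toward timing, area, or congestion without changing its learning machinery.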

Samsung adopts Synopsys’ machine learning-driven place-and-route solution

Major EDA vendors have already started adding machine learning capabilities to their product portfolio. Among them Synopsys, whose IC Compiler II place-and-route solution – part of the Synopsys Fusion Design Platform – includes machine learning technologies. Samsung has recently adopted Synopsys’ IC Compiler II place-and-route solution for its 5nm mobile system-on-chip production design, reporting – thanks to machine learning – up to five percent higher frequency, five percent lower leakage power and faster turn-around-time.

And Samsung’s Joydip Das, Senior Engineer at the company’s Austin R&D center, is chairing the new special interest group launched by Silicon Integration Initiative (Si2) to focus on the growing needs and opportunities in artificial intelligence and machine learning for electronic design automation. Other Si2 members participating in the SIG include Advanced Micro Devices, Ansys, Cadence, Hewlett Packard Enterprise, IBM, Intel, Intento Design, NC State University, PDF Solutions, Sandia Labs, Synopsys and the University of California, Berkeley.

Ceva unveils a superfast DSP

Ceva has recently announced what it claims to be “the world’s most powerful DSP architecture”, the Gen4 CEVA-XC, targeted at the most complex parallel processing workloads required for 5G endpoints and Radio Access Networks, enterprise access points and other multigigabit low latency applications. As stated in a company press release, the Gen4 CEVA-XC unifies the principles of scalar and vector processing, enabling two-times 8-way VLIW and up to 14,000 bits of data-level parallelism. The devices incorporate a pipeline architecture enabling operating speeds of 1.8 GHz at a 7nm process node, using a unique physical design architecture for a fully synthesizable design flow, and an innovative multithreading design. This allows the processors to be dynamically reconfigured as either a wide SIMD machine or divided into smaller simultaneous SIMD threads. The first processor based on the Gen4 CEVA-XC architecture is the multicore CEVA-XC16, described by the company as “the fastest DSP ever made”. Architected with the latest 3GPP release specifications in mind, the CEVA-XC16 offers up to 1,600 Giga Operations Per Second that can be reconfigured as two separate parallel threads. According to Ceva, new concepts used in this device boost the performance per square millimeter when massive numbers of users are connected in a crowded area, leading to 35% die area savings for a large cluster of cores, as is typical for custom 5G base station silicon.

CEVA-XC16 block diagram. Image credit: CEVA

Ansys results

Ansys has recently reported fourth quarter 2019 GAAP and non-GAAP revenue growth of 17% and 18%, respectively, or 18% for each in constant currency. For fiscal year 2019, GAAP and non-GAAP revenue growth was 17%, or 19% in constant currency. In a press release, Ansys President & CEO Ajei Gopal stated that in 2019 the company extended its market and technology leadership and differentiated its multiphysics product portfolio both organically, as well as through strategic acquisitions, and expanded its partner ecosystem. “Our vision of making simulation pervasive across the product lifecycle is resonating with customers and partners”, he said.

Ansys has also expanded its product portfolio with the recent release of RaptorH, targeted at accelerating and improving 5G, three-dimensional integrated circuit and radio-frequency integrated circuit design workflows. RaptorH fuses features from two preexisting Ansys products: HFSS and RaptorX.

Recent acquisitions

UK-based Dialog Semiconductor – a provider of power management, charging, AC/DC power conversion, Wi-Fi and Bluetooth low energy technology – has announced the acquisition of Adesto Technologies. Based in Santa Clara, CA, Adesto is a provider of custom integrated circuits and embedded systems for the Industrial Internet of Things market. Mellanox Technologies, a supplier of interconnect solutions for data center servers and storage systems, has announced that it will acquire Titan IC, a developer of network intelligence and security technology. STMicroelectronics has signed an agreement to acquire a majority stake in Gallium Nitride specialist Exagan. Founded in 2014 and headquartered in Grenoble, France, Exagan is dedicated to accelerating the power-electronics industry’s transition from silicon-based technology to GaN-on-silicon technology, enabling smaller and more efficient electrical converters.
