In a landmark announcement at Embedded World, Intel, alongside its subsidiary Altera, unveiled a suite of edge-optimized processors, FPGAs (field-programmable gate arrays), and market-ready solutions, marking a significant stride in embedding artificial intelligence (AI) into the fabric of edge computing. These innovations are set to extend AI’s reach across sectors including retail, healthcare, and industrial automation, redefining the landscape of edge computing.
Dan Rodriguez, Intel’s Corporate Vice President and General Manager of Network and Edge Solutions Group, emphasized the transformative potential of these offerings, stating, “This next generation of Intel edge-optimized processors and discrete GPUs is a leap forward in empowering businesses to integrate AI seamlessly with compute, media, and graphics workloads.”
The newly introduced Intel® Core™ Ultra, Intel® Core™, and Intel Atom® processors, coupled with discrete Intel® Arc™ graphics processing units (GPUs), are engineered to propel innovation in AI, visual computing, and media processing. This leap forward promises to catalyze faster, more intelligent decision-making at the edge, with an emphasis on on-premises computing.
In particular, the Agilex™ 5 FPGAs are tailored for mid-range applications, boasting unparalleled performance per watt. These FPGAs, with AI integrated into their architecture, promise a new level of integration, low latency, and enhanced computing prowess, catering to intelligent edge applications.
Agilex 5 FPGAs for mid-range applications with best-in-class performance per watt target a broad set of applications, including video, industrial, robotics, medical and others. (Credit: Altera, an Intel Company)
Intel’s venture into AI-enhanced edge devices is built upon an impressive foundation of over 90,000 edge deployments. The introduction of Intel Core Ultra processors heralds a new era in image classification and inference performance, blending the prowess of Intel Arc GPUs and a neural processing unit (NPU) into a unified system-on-chip (SoC) solution.
In the rapidly evolving automotive sector, two technological giants, Arm and Intel, are making significant announcements, each promising to revolutionize the development and capabilities of AI-enabled vehicles. Here’s a detailed comparison of their respective announcements to provide insights into their offerings and the potential impact on the automotive industry.
Arm’s Automotive Innovations
Arm’s announcement, spearheaded by Dipti Vachani, SVP and GM of the Automotive Line of Business, focuses on a series of industry firsts designed to accelerate the development cycle of automotive technologies by up to two years. The key highlights from Arm include:
Introduction of Arm Automotive Enhanced (AE) Processors: These new processors are designed to bring Armv9 and server-class performance to automotive applications, specifically AI-driven use cases like autonomous vehicles and Advanced Driver Assistance Systems (ADAS).
Future Arm Compute Subsystems (CSS) for Automotive: Aimed at further reducing development time and cost, these subsystems promise maximum flexibility for high-performance automotive systems.
Virtual Prototyping Solutions: For the first time, Arm offers the ecosystem the ability to develop software on virtual prototyping solutions before physical silicon is available, greatly accelerating development cycles.
The next-generation AE processors, including the Arm Neoverse V3AE, Cortex-A720AE, Cortex-A520AE, Cortex-R82AE, and Mali-C720AE, are tailored for the automotive industry, offering improvements in AI capabilities, security, and virtualization.
By Bob Brennan, VP, GM, Intel Foundry Services, Customer Solutions Engineering
Artificial intelligence isn’t just driving headlines and stock valuations. It’s also “pushing the boundaries of silicon technology, packaging technology, the construction of silicon, and the construction of racks and data centers,” says Intel’s Bob Brennan.
“There is an insatiable demand,” Brennan adds. Which is great timing since his job is to help satisfy that demand.
Brennan leads customer solutions engineering for Intel Foundry, which aims to make it as easy and fast as possible for the world’s fabless chipmakers to fabricate and assemble their chips through Intel factories.
“We are engaged from architecture to high-volume manufacturing—soup to nuts—and we present the customer with a complete solution,” Brennan asserts.
Inviting New Chipmakers in by Turning Intel Inside Out
That contrasts with a foundry like TSMC, which offers research and development, wafer fabrication and selected advanced packaging. Intel Foundry offers those services and a lot more—going beyond construction and helping with testing, firmware (the software that makes hardware work) and the intricacies of the global semiconductor supply chain.
In the realm of desktop computing, speed and power are the twin pillars upon which the ultimate user experience rests. Today, Intel has once again affirmed its commitment to these principles with the announcement of its Intel® Core™ 14th Gen i9-14900KS processor, heralding a new epoch in desktop processor speeds.
Intel’s latest marvel, the i9-14900KS, bursts through previous boundaries by offering a staggering 6.2 gigahertz (GHz) max turbo frequency straight out of the box. This isn’t just another incremental step forward; it’s a giant leap that cements Intel’s status as the purveyor of the world’s fastest desktop processor. For the legion of PC enthusiasts, gamers, and content creators, this represents not just an upgrade but a transformation in what they can expect from their desktop systems.
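For a sense of scale, the clock period at that frequency can be computed directly (simple arithmetic, not a figure from the announcement):

```python
# At a 6.2 GHz max turbo frequency, how long does one clock cycle last?
freq_hz = 6.2e9               # 6.2 gigahertz
cycle_s = 1.0 / freq_hz       # seconds per cycle
cycle_ns = cycle_s * 1e9      # ~0.161 nanoseconds per cycle
print(f"{cycle_ns:.3f} ns per cycle")
```

In other words, each core completes a clock cycle in roughly a sixth of a nanosecond at peak turbo.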
Intel Core 14th Gen i9-14900KS Special Edition Unlocked Desktop Processor provides a record-breaking 6.2GHz frequency right out of the box – giving high-end PC enthusiasts the cutting-edge power they look for in their desktops. (Credit: Intel Corporation)
“The Intel Core i9-14900KS showcases the full power and performance potential of the Intel Core 14th Gen desktop processor family and its performance hybrid architecture,” says Roger Chandler, Intel’s vice president and general manager of the Enthusiast PC and Workstation Segment. His words underscore a fundamental truth about today’s computing demands: they are evolving, and Intel is leading the charge in meeting these demands head-on.
In the ever-evolving world of artificial intelligence, performance and efficiency are paramount. The ability to train and deploy AI models quickly and cost-effectively has become a competitive advantage for organizations across various industries. Intel, a pioneer in the field of semiconductor technology, continues to push the boundaries of AI performance with its Intel Gaudi2 accelerator and 4th Gen Intel Xeon Scalable processors. In a recent development, Intel has achieved a remarkable 2x performance leap on the GPT-3 benchmark by implementing FP8 software. This achievement, validated through the industry-standard MLPerf training v3.1 benchmark, underscores Intel’s commitment to providing competitive AI solutions that can be deployed anywhere.
The Milestone Announcement
On November 8, 2023, Intel announced the groundbreaking results of its MLPerf training v3.1 benchmark for training AI models. These results encompassed Intel’s Gaudi2 accelerators and 4th Gen Intel Xeon Scalable processors equipped with Intel Advanced Matrix Extensions (Intel AMX). The standout performance came from Intel Gaudi2, which demonstrated an impressive 2x performance improvement thanks to the implementation of the FP8 data type on the v3.1 training GPT-3 benchmark. This accomplishment reaffirms Intel’s dedication to making AI accessible and efficient for a wide range of applications.
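The FP8 data type behind that speedup packs a floating-point value into 8 bits; the widely used E4M3 variant has 1 sign bit, 4 exponent bits (bias 7), and 3 mantissa bits. The sketch below is a stand-alone illustration of that rounding, not Gaudi2’s actual implementation, and simplifies the format’s special cases:

```python
import math

def quantize_fp8_e4m3(x: float) -> float:
    """Round x to the nearest FP8 E4M3-representable value
    (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits).
    Illustrative only -- real hardware also defines NaN handling."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    # Largest normal E4M3 magnitude is 448; saturate beyond it.
    if mag >= 448.0:
        return sign * 448.0
    exp = math.floor(math.log2(mag))
    exp = max(exp, -6)         # exponents below -6 fall into subnormals
    step = 2.0 ** (exp - 3)    # 3 mantissa bits -> 8 steps per binade
    return sign * round(mag / step) * step
```

The coarse grid (only 8 steps between consecutive powers of two) is why FP8 training relies on per-tensor scaling to keep values inside the format’s narrow range, trading precision for roughly double the throughput of 16-bit formats.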
Sandra Rivera, Intel’s Executive Vice President and General Manager of the Data Center and AI Group, highlighted the significance of this achievement, stating, “We continue to innovate with our AI portfolio and raise the bar with our MLPerf performance results in consecutive MLCommons AI benchmarks. Intel Gaudi and 4th Gen Xeon processors deliver a significant price-performance benefit for customers and are ready to deploy today. Our breadth of AI hardware and software configuration offers customers comprehensive solutions and choice tailored for their AI workloads.”
An Intel Corporation lab in Hillsboro, Oregon, holds 24 powered-on Intel Xeon-based servers in a tank filled with synthetic non-electrically conductive oil. Immersion cooling is a method of managing heat from processors more effectively than by traditional air cooling. Intel is working with industry partners to develop solutions for today’s data centers and those in the future. (Credit: Intel Corporation)
Extending Moore’s Law means putting more transistors on an integrated circuit and, increasingly, adding more cores. Doing so improves performance but requires more energy.
Over the past decade, Intel estimates it has saved 1,000 terawatt hours of electricity through the improvements its engineers have made to processors. These advances are complemented by cooling technologies – fans, in-door coolers, direct-to-chip cooling – that further manage heat, conserve energy and reduce carbon emissions.
These cooling systems can consume up to 40% of a data center’s energy.1 As Intel looks to increase performance, future gains must be achieved in an energy-efficient way, and air cooling may not be the solution.
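To make the 40% figure concrete, here is a back-of-the-envelope calculation; the facility size and the 50% cooling improvement are hypothetical numbers for illustration, not figures from the article:

```python
def cooling_share(total_kw: float, cooling_fraction: float = 0.40):
    """Split a data center's power draw into cooling vs. everything else,
    assuming cooling consumes `cooling_fraction` of the total."""
    cooling_kw = total_kw * cooling_fraction
    return cooling_kw, total_kw - cooling_kw

def savings_from_better_cooling(total_kw: float,
                                cooling_fraction: float = 0.40,
                                reduction: float = 0.5):
    """Power saved if an improved method (e.g. immersion cooling)
    cuts cooling draw by `reduction`. Hypothetical figures."""
    cooling_kw, _ = cooling_share(total_kw, cooling_fraction)
    return cooling_kw * reduction
```

Under these assumptions, a 10 MW facility spends 4 MW on cooling, so halving cooling power would free up 2 MW, a fifth of the site’s entire draw, which is why more efficient cooling is such a large lever.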
Researchers at Intel and QuTech, an advanced quantum computing research center consisting of the Delft University of Technology (TU Delft) and the Netherlands Organization for Applied Scientific Research (TNO), have successfully created the first silicon qubits at scale at Intel’s D1 fabrication facility in Hillsboro, Oregon. The result is a process that can fabricate more than 10,000 arrays with several silicon-spin qubits on a single wafer with greater than 95% yield. This achievement is dramatically higher in both qubit count and yield than the typical university and laboratory processes used today.
This research was published in the journal Nature Electronics and is Intel’s first peer-reviewed research demonstrating the successful fabrication of qubits on 300mm silicon. The new process uses advanced transistor fabrication techniques including all-optical lithography to produce silicon-spin qubits, the same equipment used to produce Intel’s latest-generation complementary metal-oxide-semiconductor (CMOS) chips. The groundbreaking research is a crucial step forward in the path toward scaling quantum chips, demonstrating that it’s possible for qubits to eventually be produced alongside conventional chips in the same industrial manufacturing facilities.
Today, Intel contributed the Scalable I/O Virtualization (SIOV) specification to the Open Compute Project (OCP) with Microsoft, enabling device and platform manufacturers access to an industry standard specification for hyperscale virtualization of PCI Express and Compute Express Link devices in cloud servers. When adopted, SIOV architecture will enable data center operators to deliver more cost-effective access to high-performance accelerators and other key I/O devices for their customers, as well as relieve I/O device manufacturers of cost and programming burdens imposed under previous standards.
Today at the annual Conference on Neural Information Processing Systems (NeurIPS), two Intel-supported whitepapers on spoken language datasets are being presented. The first paper, The People’s Speech, targets “automatic speech recognition” tasks; the second is Multilingual Spoken Words Corpus (MSWC), which involves “keyword spotting.” The datasets coming out of each project contribute a sizeable volume of rich audio data, and each ranks among the largest collections available in its class.
The MSWC paper is co-authored by Keith Achorn, an AI frameworks engineer in Intel’s Software and Advanced Technology Group (SATG). Keith talks about his experiences on the project in a blog on the Intel Community site.
In a recent global study by Ponemon Institute,1 73% of IT decision-makers say they are more likely to purchase technologies and services from companies that proactively find, mitigate and communicate security vulnerabilities.
Intel is committed to product and security assurance and regularly releases functional and security updates for supported products and services. The Intel platform update (IPU) helps simplify the update process and improve predictability for Intel’s customers and partners. The updates provide security and functional improvements across Intel’s product portfolio.
“Security doesn’t just happen. If you are not finding vulnerabilities, then you are not looking hard enough,” said Suzy Greenberg, vice president, Intel Product Assurance and Security. “Intel takes a transparent approach to security assurance to empower customers and deliver product innovations that build defenses at the foundation, protect workloads and improve software resilience.”