ArterisIP Drives Artificial Intelligence & Machine Learning Innovation for 15 Chip Companies

Interconnect IP enables fast and efficient integration of tens or hundreds of heterogeneous neural network hardware accelerators

CAMPBELL, Calif. — November 14, 2017 — ArterisIP, the innovative supplier of silicon-proven commercial system-on-chip (SoC) interconnect IP, today announced that in the past two years, 15 companies have licensed ArterisIP’s FlexNoC Interconnect or Ncore Cache Coherent Interconnect IP as critical components in new artificial intelligence (AI) and machine learning SoCs.

ArterisIP technology gives chip design teams the means to integrate machine learning processing elements into their systems quickly and efficiently, ensuring that they meet their schedule and functional safety requirements.

Ty Garibay, Chief Technology Officer (CTO), ArterisIP

These nine (9) publicly announced ArterisIP customers have created or are developing machine learning and AI SoCs for data center, automotive, consumer and mobile applications:

  1. Movidius (Intel) – Myriad™ ultra-low power machine learning vision processing units (VPU)
  2. Mobileye (Intel) – Since 2010; EyeQ®3, EyeQ®4 and EyeQ®5 advanced driver assistance systems (ADAS) using multiple heterogeneous processing elements for vision processing and machine learning
  3. NXP – Multiple ADAS and autonomous driving SoCs implementing machine learning, based on cache coherency and functional safety mechanisms
  4. Toshiba – Automotive ADAS SoC using cache coherence and functional safety mechanisms
  5. HiSilicon (Huawei) – Since 2013; new Kirin 970 Mobile AI Processor with Neural Processing Unit (NPU)
  6. Cambricon – Neural network processor with multiple processing elements
  7. Dream Chip Technologies – ADAS image sensor processor with multiple digital signal processor (DSP) and single instruction multiple data (SIMD) hardware accelerators
  8. Nextchip – Vision ADAS SoC with multiple processing elements
  9. Intellifusion – Machine learning visual intelligence with multiple heterogeneous on-chip hardware engines

In addition to the nine publicly announced customers listed above, the following six (6) companies are also using ArterisIP to implement new AI and machine learning hardware architectures:

  • Two (2) major semiconductor and systems vendors targeting autonomous driving
  • A major semiconductor vendor targeting consumer electronics
  • A major autonomous flying vehicle vendor
  • A leader in new automotive sensor technologies
  • An innovator in data center analytics

All of these innovation leaders create SoCs that accelerate machine learning and neural network algorithms using multiple instances of heterogeneous processing elements. Each SoC architecture is tailored to its target market requirements based on an on-chip interconnect configured specifically for the task. They have all licensed ArterisIP interconnect technology because it:

  • Eases the on-chip integration of these different processing engines while allowing design teams to finely tune power management and quality-of-service (QoS) characteristics, like path latency and bandwidth;
  • Simplifies software development and enables customized dataflow processing by supporting cache coherence in key parts of a system. This allows the system to take advantage of data reuse and local accumulation in shared caches, which reduces die area and can increase memory bandwidth while reducing processing latency and power consumption;
  • Protects data in transit and at rest to increase functional safety diagnostic coverage, allowing large supercomputer-like SoCs to meet the stringent requirements of the automotive ISO 26262 specification.

“Efficiently implementing machine learning and visual computing in commercially viable systems requires hardware teams to accelerate neural network functions using many types of hardware accelerators, with the types and number of accelerators based on performance, power and area/cost requirements,” said Ty Garibay, Chief Technology Officer at ArterisIP. “ArterisIP technology gives these teams the means to integrate these processing elements into their systems quickly and efficiently, ensuring that they meet their schedule and functional safety requirements.”

“Machine learning has become the ‘killer app’ for our advanced interconnect IP, with a perfect match between the QoS, power consumption and performance required by AI and what the FlexNoC and Ncore interconnects deliver,” said K. Charles Janac, President and CEO of ArterisIP. “Our team is excited to be such a critical enabler to the new generation of neural network, machine learning and artificial intelligence chips.”

Presentation Download

For more information, please download the presentation titled “Implementing Machine Learning and Neural Network Chip Architectures using Network-on-Chip Interconnect IP.”

About ArterisIP

ArterisIP provides system-on-chip (SoC) interconnect IP to accelerate SoC semiconductor assembly for a wide range of applications from automobiles to mobile phones, IoT, cameras, SSD controllers, and servers, for customers such as Samsung, Huawei/HiSilicon, Mobileye (Intel), Altera (Intel), and Texas Instruments. ArterisIP products include the Ncore cache coherent and FlexNoC non-coherent interconnect IP, as well as the optional Resilience Package (ISO 26262 functional safety) and PIANO automated timing closure capabilities. Customer results obtained by using the ArterisIP product line include lower power, higher performance, more efficient design reuse and faster SoC development, leading to lower development and production costs. For more information, visit the ArterisIP website or find the company on LinkedIn.


Kurt Shuler
Arteris Inc.
+1 408 470 7300
Email Contact
