Deploying vision capabilities on edge platforms requires difficult tradeoffs between latency, throughput, memory footprint, communication bandwidth, power, and cost. Luckily, there is an ever-growing diversity of hardware choices, allowing system designers to select the option that best meets their needs. Historically, however, hardware diversity also implied time-consuming software development to port vision applications and algorithms to a new hardware target and optimize them for real-time constraints. Intel® is working with the industry to solve the puzzle of hardware diversity for traditional and deep learning-based vision at the edge.
The Intel® Distribution of OpenVINO™ toolkit (which stands for Open Visual Inference and Neural Network Optimization) enables developers to streamline the deployment of deep learning inference and high-performance computer vision applications across a wide range of vertical use cases at the edge. The toolkit is compatible with popular open source deep learning frameworks, and enables developers to easily target execution on CPUs and accelerators (GPUs, FPGAs, VPUs, and so on) specially designed for AI inference, such as Intel® Vision Accelerator Design Products. The beauty of the toolkit is that it provides a unified and common abstraction layer for AI inference across diverse hardware targets, with a comprehensive and intuitive API that merges simplicity with optimized performance. Software simplicity and performance – just what the developer ordered!
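To make the "unified abstraction layer" idea concrete, here is a minimal sketch of the toolkit's Inference Engine Python workflow. It assumes a model already converted to OpenVINO's IR format (the `model.xml`/`model.bin` file names are placeholders), and shows that retargeting to an accelerator is just a change of device name string; details may vary by toolkit version.

```python
# Sketch: loading an IR model and running inference with the
# OpenVINO Inference Engine Python API (file names are placeholders).
from openvino.inference_engine import IECore

ie = IECore()

# Read the Intermediate Representation produced by the Model Optimizer.
net = ie.read_network(model="model.xml", weights="model.bin")

# Compile for a target device. Swapping "CPU" for "GPU", "MYRIAD" (VPU),
# or an FPGA plugin retargets the same application code.
exec_net = ie.load_network(network=net, device_name="CPU")

# Run inference: feed the input blob, get back a dict of output blobs.
input_blob = next(iter(net.input_info))
results = exec_net.infer(inputs={input_blob: preprocessed_image})
```

The key point is that the application logic above never changes across hardware targets; only the `device_name` argument selects the execution backend.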