
Intel portfolio to speed AI development

Steve Rogerson
November 19, 2019



Intel says it is speeding the development, deployment and performance of artificial intelligence (AI) with hardware that stretches from the cloud to the edge.
 
At the company’s AI Summit last week in San Francisco, Intel detailed updates to products designed to accelerate AI system development and deployment from cloud to edge.
 
The tech giant demonstrated its Nervana neural network processors (NNPs) for training (NNP-T1000) and inference (NNP-I1000). These are the company’s first purpose-built ASICs for complex deep learning, offering scale and efficiency for cloud and data centre users.
 
Intel also revealed its next-generation Movidius Myriad vision processing unit (VPU) for edge media, computer vision and inference applications.
 
"With this next phase of AI, we're reaching a breaking point in terms of computational hardware and memory,” said Naveen Rao, Intel corporate vice president. “Purpose-built hardware like Intel Nervana NNPs and Movidius Myriad VPUs are necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information towards the transformation of information into knowledge."
 
These products should strengthen Intel’s portfolio of AI offerings, which is expected to generate more than $3.5bn in revenue in 2019. The portfolio is intended to let users develop and deploy AI models at any scale, from massive clouds to tiny edge devices and everything in between.
 
Now in production and being delivered to customers, the Nervana NNPs are part of a systems-level AI approach offering a full software stack developed with open components and deep learning framework integration.
 
The Nervana NNP-T strikes a balance between computing, communication and memory, allowing near-linear, energy-efficient scaling from small clusters up to large pod supercomputers. The Nervana NNP-I is power- and budget-efficient and suited to running intense, multimodal inference at real-world scale in flexible form factors. Both products were developed to meet the AI processing needs of users such as Baidu and Facebook.
 
"We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana neural network processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I," said Misha Smelyanskiy, director of AI system co-design at Facebook.
 
Additionally, Intel’s Movidius VPU, scheduled to be available in the first half of 2020, incorporates architectural advances expected to deliver more than ten times the inference performance of the previous generation, with up to six times the power efficiency of competing processors.
 
Intel also announced its DevCloud for the Edge, which, along with the firm’s distribution of the OpenVINO toolkit, addresses a key pain point for developers by allowing them to try, prototype and test AI products on a broad range of Intel processors before they buy hardware.
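 
By way of illustration, here is a minimal sketch of the workflow those tools target, using the 2019-era Inference Engine Python API from the OpenVINO toolkit: load a model already converted to OpenVINO’s intermediate representation, compile it for a target device and run inference. The filenames and random input are placeholders, and the sketch is illustrative rather than Intel sample code.

import numpy as np
from openvino.inference_engine import IECore, IENetwork

# Load a model already converted to OpenVINO's IR format by the Model
# Optimizer ("model.xml"/"model.bin" are placeholder filenames).
ie = IECore()
net = IENetwork(model="model.xml", weights="model.bin")

input_blob = next(iter(net.inputs))    # name of the network's input
output_blob = next(iter(net.outputs))  # name of the network's output

# Compile the network for a target device; passing "MYRIAD" instead of
# "CPU" would retarget the same model to a Movidius VPU.
exec_net = ie.load_network(network=net, device_name="CPU")

# Dummy input shaped to the network's expected NCHW layout.
n, c, h, w = net.inputs[input_blob].shape
frame = np.random.rand(n, c, h, w).astype(np.float32)

result = exec_net.infer(inputs={input_blob: frame})
print(result[output_blob].shape)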
 
Advancing deep learning reasoning and context requires increasingly complex data, models and techniques, which in turn demand new ways of thinking about architectures.
 
With most of the world running some part of its AI on Intel Xeon scalable processors, the company says it is continuing to improve this platform with features such as Intel Deep Learning Boost with vector neural network instructions (VNNI), which bring enhanced AI inference performance across data centre and edge deployments.
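 
Roughly speaking, VNNI fuses the int8 multiply-accumulate at the heart of quantised inference into a single instruction (VPDPBUSD), which sums four unsigned-by-signed 8-bit products into a 32-bit accumulator lane. The toy numpy sketch below, which is illustrative and not Intel code, mimics the arithmetic of one such lane.

import numpy as np

# One 32-bit accumulator lane of VNNI's VPDPBUSD: four u8 activations
# multiplied by four s8 weights, summed into an int32 accumulator in a
# single fused step.
acc = np.int32(0)
activations = np.array([2, 7, 1, 5], dtype=np.uint8)  # unsigned 8-bit inputs
weights = np.array([3, -4, 6, 2], dtype=np.int8)      # signed 8-bit weights

acc += np.sum(activations.astype(np.int32) * weights.astype(np.int32))
print(acc)  # 6 - 28 + 6 + 10 = -6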
 
While that will continue to serve as a strong AI foundation for years, the most advanced deep learning training needs of Intel’s users call for performance to double every 3.5 months. Intel says it is equipped to look at the full picture of computing, memory, storage, interconnect, packaging and software to increase efficiency and programmability, and to ensure deep learning can scale across thousands of nodes to drive what it calls the knowledge revolution.
 
The photo shows the Nervana NNP-T training mezzanine card.