Intel announced during Baidu’s Create conference this week that Baidu will help to develop Intel’s Nervana Neural Network Processor. Intel corporate vice president Naveen Rao made the announcement on stage at the conference in Beijing.
“The next few years will see an explosion in the complexity of AI models and the need for massive deep learning compute at scale. Intel and Baidu are focusing their decade-long collaboration on building radical new hardware, codesigned with enabling software, that will evolve with this new reality – something we call ‘AI 2.0’.”
Intel’s Neural Network Processor for Training, codenamed NNP-T 1000, is designed to train deep learning models quickly. The chip pairs a large amount (32GB) of high-bandwidth memory (HBM) with local SRAM placed close to the compute units, allowing more model parameters to be stored on-die – saving significant power while increasing performance.
The NNP-T 1000 is set to ship alongside the Neural Network Processor for Inference (NNP-I 1000) chip later this year. As the name suggests, the NNP-I 1000 is designed for AI inference and features general-purpose processor cores based on Intel’s Ice Lake architecture.
Baidu and Intel have a history of collaborating in AI. Intel has helped to optimise Baidu’s PaddlePaddle deep learning framework for its Xeon Scalable processors since 2016. More recently, Baidu and Intel developed the BIE-AI-Box – a hardware kit for analysing the frames of footage captured by cockpit cameras.
Intel sees a great deal of its future growth in AI. The company’s AI chips generated $1 billion in revenue last year, and Intel expects that figure to grow around 30 percent annually, reaching $10 billion by 2022.
By Ryan Daws