Intel Reveals Two New AI-Focused Chips at the Hot Chips Conference

In a bid to accelerate both training and inference for artificial intelligence (AI) models, Intel has unveiled two new processors. The two chips are part of its Nervana Neural Network Processor (NNP) line.

The AI-focused chips, called Spring Crest and Spring Hill, were disclosed on Tuesday at the Hot Chips Conference in Palo Alto, California.

The Hot Chips Conference is a tech symposium held annually in August.


Why are these chips important?

AI-focused workloads are growing each year. Turning data into information, and then into knowledge, requires purpose-built hardware: compute, memory, storage, and interconnect technologies that can evolve to support new and increasingly complex AI techniques.

At #HotChips31, Intel is pushing #AI everywhere:

— Intel News (@intelnews) August 20, 2019

These two new accelerators, part of Intel's Nervana NNP line, are built from the ground up for AI, with the aim of giving customers the right intelligence at the right moment.

"In an AI empowered world, we will need to adapt hardware solutions into a combination of processors tailored to specific use cases," said Naveen Rao, Intel VP for Artificial Intelligence Products Group.

Rao continued, "This means looking at specific application needs and reducing latency by delivering the best results as close to the data as possible."

What will the chips do?

The Nervana Neural Network Processor for Training (Spring Crest) is built to handle data for several different deep learning models within a power budget, while delivering high performance and improved memory efficiency.

It is designed with flexibility in mind, balancing compute, communication, and memory.

The Nervana Neural Network Processor for Inference (Spring Hill), by contrast, is created specifically to accelerate deep learning deployment at scale. Easy to program, with low latencies, fast code porting, and support for all major deep learning frameworks, it covers a broad range of capabilities.
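To see why training and inference warrant separate chips, it helps to compare the two workloads directly. The following is a minimal, purely illustrative sketch (not Intel code, and not tied to any NNP API) using a toy one-weight model in NumPy: training requires a forward pass, a gradient computation, and a weight update, while inference is the forward pass alone.

```python
import numpy as np

# Illustrative only: a tiny one-weight "network" contrasting the training
# workload (forward + backward + update) with the inference workload
# (forward pass only). All names here are hypothetical examples.
rng = np.random.default_rng(0)

# Toy data: learn y = 2x from a handful of samples.
x = rng.normal(size=(8, 1))
y = 2.0 * x

w = np.zeros((1, 1))  # the single weight to be learned

def train_step(w, x, y, lr=0.1):
    """Training: forward pass, gradient computation, weight update."""
    pred = x @ w                       # forward pass
    grad = x.T @ (pred - y) / len(x)   # gradient of mean-squared error
    return w - lr * grad               # weight update

def infer(w, x):
    """Inference: forward pass only -- no gradients, no weight updates."""
    return x @ w

for _ in range(200):
    w = train_step(w, x, y)

print(float(w[0, 0]))  # the learned weight approaches 2.0
```

The extra bookkeeping in `train_step` (activations kept around for the gradient, plus the update itself) is what drives the heavier memory and bandwidth demands of a training chip, while an inference chip can focus on running `infer` at low latency and high throughput.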

Watch the video: HC30-T2: Architectures for Accelerating Deep Neural Nets.

