Intel’s Hala Point Supports Future Neuromorphic AI Research, 200 Times Faster Than the Human Brain!

By admin Apr 18, 2024

Intel is pioneering advancements in neuromorphic computing with Hala Point, a system designed to support the next generation of AI research. Hala Point succeeds the company’s first large-scale research system, Pohoiki Springs, with significant architectural improvements: more than a 10-fold increase in neuron capacity and up to 12 times higher performance. When running spiking neural networks with reduced neuron counts, the system can operate at speeds up to 200 times faster than the human brain.

According to Mike Davies, Director of Neuromorphic Computing Lab at Intel Research, “The computational cost of current AI models is increasing at an unsustainable rate. The industry needs new computing methods that can scale effectively. Intel developed Hala Point to combine efficient deep learning with novel neuromorphic continual learning and optimization capabilities.”

Neuromorphic chips such as those in Hala Point emulate biological neurons: many small processing units communicate through spike signals and adjust their behavior in response. This architecture departs from traditional chips by decentralizing computation and letting processing units communicate with one another directly.
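The spiking unit described above can be sketched in a few lines of Python. This is an illustrative model only, not Intel's implementation; the leaky integrate-and-fire (LIF) dynamics shown here are a standard textbook neuron model, and all parameter values are assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the kind of spiking
# unit a neuromorphic core emulates. Illustrative sketch only; decay
# and threshold values are arbitrary assumptions.

class LIFNeuron:
    def __init__(self, decay=0.9, threshold=1.0):
        self.v = 0.0              # membrane potential (local state)
        self.decay = decay        # leak factor applied each timestep
        self.threshold = threshold

    def step(self, input_current):
        """Integrate input with leak; emit a spike on threshold crossing."""
        self.v = self.v * self.decay + input_current
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return 1              # spike event sent to connected neurons
        return 0                  # no event: no communication, no work

neuron = LIFNeuron()
spikes = [neuron.step(0.4) for _ in range(5)]  # constant drive for 5 steps
```

With a constant input of 0.4, the potential builds over a few timesteps, crosses the threshold once, resets, and begins accumulating again, so the unit communicates only at the moment it fires.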

Intel introduced its first neuromorphic chip, Loihi, in 2018, built on 14nm technology and applied in scenarios such as machine olfaction. In 2021, Intel upgraded its neuromorphic lineup with Loihi 2, its second-generation chip and the first built on the Intel 4 node, Intel’s first EUV process, which the company positions as comparable to a 4nm-class process. Loihi 2 has 128 neuromorphic cores, each containing 192KB of flexible memory.

Neuromorphic chips like Loihi 2 differ from conventional CPUs and GPUs in that they use no external memory. Each neuron has its own local memory, allocated to synaptic weights for its inputs, a cache of recent activity, and a list of the neurons to which its spikes are sent. This architecture allows Loihi 2 to run AI inference and optimization tasks up to 50 times faster, with up to 100 times lower power consumption, than conventional CPU and GPU systems.
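A toy data structure can make that memory layout concrete. This is a sketch mirroring the article's description (weights, recent activity, fan-out list); the field names and format are assumptions, not Loihi 2's actual on-chip layout.

```python
from dataclasses import dataclass, field

# Illustrative only: the per-neuron local state the article describes.
# Nothing here reflects Loihi 2's real memory format.

@dataclass
class NeuronState:
    weights: dict = field(default_factory=dict)          # presynaptic id -> synaptic weight
    recent_activity: list = field(default_factory=list)  # cached recent input spikes
    fan_out: list = field(default_factory=list)          # ids of neurons receiving our spikes

    def receive(self, src_id, t):
        """Cache an incoming spike and return its weighted contribution."""
        self.recent_activity.append((src_id, t))
        return self.weights.get(src_id, 0.0)

n = NeuronState(weights={3: 0.5}, fan_out=[7, 9])
contribution = n.receive(3, t=0)   # spike from neuron 3 at timestep 0
```

Because every neuron carries its own weights and routing list, a spike can be processed entirely locally, with no round trip to shared external memory.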

In addition to its hardware offerings, Intel has also introduced Lava, a new software framework designed for Loihi chips. Lava and its associated libraries, written in Python, are open-source on GitHub and allow developers to create programs for Loihi without direct access to hardware.

Currently, Intel uses Loihi 2 chips in applications such as robotic arms, neuromorphic skin, and machine olfaction. The chip applies principles of neuromorphic computing, including asynchronous and event-driven spiking neural networks (SNNs), compute-in-memory architecture, and dynamic sparse connectivity to achieve significant improvements in energy efficiency and performance.
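The event-driven, sparsely connected principles above can be sketched as follows. This is a minimal illustration under assumed topology and weights, not Intel's routing scheme: work is performed only along edges leaving neurons that actually fired, and each neuron connects to only a few targets.

```python
# Event-driven spike routing over a sparse connection table.
# Topology and weights are arbitrary assumptions for illustration.

connections = {            # sparse adjacency: source -> (target, weight) pairs
    0: [(2, 0.5)],
    1: [(2, 0.25), (3, 0.25)],
    2: [(3, 0.75)],
}

def propagate(spiking_neurons):
    """Deliver weighted input only along edges leaving spiking neurons."""
    inputs = {}
    for src in spiking_neurons:                  # event-driven: idle neurons cost nothing
        for dst, w in connections.get(src, []):  # sparse: few targets per source
            inputs[dst] = inputs.get(dst, 0.0) + w
    return inputs

result = propagate([0, 1])   # only neurons 0 and 1 fired this timestep
```

Neuron 2 never fired, so its outgoing edge is never touched: the cost of a timestep scales with spike activity rather than with network size, which is the source of the efficiency gains the article describes.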

When used with spiking neural network models, Hala Point can run its full set of 1.15 billion neurons in real-time at 20 times the speed of the human brain, with speeds reaching up to 200 times faster when working with lower neuron counts. While Hala Point is not intended for neuroscience modeling, its neuron capacity is approximately equivalent to the brain of an owl or a macaque monkey.

Early research indicates that Hala Point can achieve an energy efficiency of 15 TOPS/W when running deep neural networks, leveraging 10:1 sparse connectivity and event-driven activity. This allows Hala Point to handle real-time data streams, such as video from cameras, without batch processing, a common GPU optimization that adds latency before results are available.
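Some back-of-envelope arithmetic shows why combining sparsity with event-driven activity cuts so much work. The layer size and activity rate below are assumed numbers chosen for intuition; only the 10:1 sparsity figure comes from the article.

```python
# Back-of-envelope arithmetic (assumed numbers, for intuition only):
# 10:1 sparse connectivity keeps 10% of a dense layer's synapses, and
# event-driven operation touches only synapses of neurons that spiked.

dense_ops = 1_000 * 1_000   # fully connected 1000x1000 layer: 1M synaptic ops
sparsity = 1 / 10           # 10:1 sparsity -> 10% of synapses exist
activity = 1 / 10           # assumption: 10% of neurons spike per timestep

event_driven_ops = int(dense_ops * sparsity * activity)
reduction = dense_ops // event_driven_ops   # overall work reduction factor
```

Under these assumptions the two effects multiply, shrinking the per-timestep work by a factor of 100 relative to a dense, always-on layer.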

Intel emphasizes that Hala Point significantly advances the foundation laid by its predecessor Pohoiki Springs by enhancing mainstream deep learning models’ performance and efficiency, especially for real-time workloads such as video, speech, and wireless communication processing. This technology could eliminate the need for frequent retraining on growing datasets, saving substantial energy over time.
