HPE collaborates with NVIDIA to bring supercomputing solutions for GenAI

Hewlett Packard Enterprise (HPE) has launched a supercomputing solution for artificial intelligence (AI) training in partnership with NVIDIA. Large enterprises, academic institutions, and governmental organizations can use the generative AI (GenAI) system to expedite the training and fine-tuning of AI models on private data sets.

The all-inclusive, AI-native solution combines liquid-cooled supercomputers, accelerated compute, networking, storage, and services, making it easier to train and fine-tune AI models quickly on proprietary data sets.

Justin Hotard, Executive Vice President and General Manager at HPE, says that to enable generative AI, organizations must adopt sustainable solutions that deliver the specialized performance and capacity of a supercomputer.

The solution is powered by NVIDIA GH200 Grace Hopper Superchips and integrated with HPE Cray supercomputing technologies. Together, they give enterprises the performance and scale needed for massive AI workloads, such as training large language models (LLMs) and deep learning recommendation models (DLRMs).

“To drive innovation and unlock research breakthroughs, the world’s top companies and research centers are training and fine-tuning AI models. However, to do so effectively and efficiently, they require purpose-built solutions,” stated Hotard. “Organizations must use sustainable solutions that offer the specialized power and capacity of a supercomputer to enable the training of AI models to facilitate generative AI.”

“NVIDIA’s collaboration with HPE on this turnkey AI training and simulation solution, powered by NVIDIA GH200 Grace Hopper Superchips, will provide customers with the performance needed to achieve breakthroughs in their generative AI initiatives,” stated Ian Buck, vice president of hyperscale and HPC at NVIDIA.

The solution includes three software tools to help users develop their own AI applications and train and fine-tune AI models. HPE’s Machine Learning Development Environment lets customers build and deploy AI models more quickly by integrating with well-known ML frameworks and simplifying data preparation. The Cray Programming Environment suite provides programmers with a complete toolkit to develop, port, debug, and refine code.

NVIDIA AI Enterprise, the third component of the system, offers pre-trained models, tools, and frameworks to make the creation and implementation of industrial AI more efficient.


According to the company, the system can scale to thousands of graphics processing units (GPUs) and can dedicate the full capacity of its nodes to a single AI workload for faster time-to-value.

HPE will make the generative AI supercomputing solution available in more than 30 countries, including India, next month.
