Elon Musk aims to build a powerful supercomputer for the next version of Grok, the generative AI chatbot from his AI-focused company, xAI, offered on the X platform. The supercomputer would enhance Grok's capabilities and support the development of more advanced language models for future iterations.
According to The Information, xAI may partner with Oracle to develop the supercomputer and have it operational by 2025. The planned setup would link together a vast number of NVIDIA H100 GPUs, reportedly four times the size of today's largest GPU clusters.
Musk said earlier this year that training Grok 2 would require around 20,000 NVIDIA H100 GPUs, and that future Grok models would need more than 100,000 H100s. Building a dedicated supercomputer at this scale would give xAI the compute capacity to train those more sophisticated models.
If xAI moves ahead with these plans, demand for H100 GPUs will rise further, a welcome development for NVIDIA. It would also intensify competition with leading players such as OpenAI and Google, pressuring companies across the field to keep upgrading their models.