Nvidia and Microsoft have announced their largest monolithic transformer language model to date: the Megatron-Turing Natural Language Generation model (MT-NLG), a jointly developed AI model with a whopping 530 billion parameters.
MT-NLG is more powerful than the previous transformer-based systems trained by the two companies, namely Microsoft’s Turing-NLG model and Nvidia’s Megatron-LM. With roughly three times as many parameters as OpenAI’s 175-billion-parameter GPT-3, spread across 105 layers, MT-NLG is much larger and more complex. Google’s Switch Transformer demo, for comparison, has 1.6 trillion parameters, though it is a sparse model rather than a monolithic one.
Bigger is generally better when it comes to neural networks, though larger models also need to ingest more training data. Compared with its predecessors, MT-NLG is better at a wide variety of natural language tasks, such as auto-completing sentences, question answering, and reading comprehension and reasoning. It can also perform these tasks with little to no fine-tuning, given just a handful of worked examples in the prompt or none at all, something referred to as few-shot or zero-shot learning, as sketched below.
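To make the distinction concrete, here is a minimal, hypothetical sketch of zero-shot versus few-shot prompting. The `generate()` function is a placeholder for whatever text-completion API a model like MT-NLG would sit behind, not a real endpoint, and the example task is illustrative.

```python
# Hypothetical sketch: zero-shot vs few-shot prompting of a large language model.
# generate() is a stand-in for the model's completion API, not a real one.

def generate(prompt: str) -> str:
    raise NotImplementedError("placeholder for a large language model's completion API")

# Zero-shot: the task is described, but no solved examples are given.
zero_shot_prompt = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: a handful of solved examples go into the prompt itself; the model's
# weights are never updated, so no fine-tuning is involved.
few_shot_prompt = (
    "Review: Great sound quality and easy to set up.\nSentiment: positive\n\n"
    "Review: Stopped working after a week.\nSentiment: negative\n\n"
    "Review: The battery died after two days.\nSentiment:"
)

# In practice one would call generate(zero_shot_prompt) or generate(few_shot_prompt)
# and read the model's completion as the answer.
```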
As these language models grow larger, AI researchers and engineers have to come up with all sorts of techniques and tricks to train them. Training requires careful coordination: the model and its training data have to be split up, stored, and processed across numerous chips at the same time.
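For a feel of what that coordination looks like in code, below is a toy data-parallel sketch using PyTorch's DistributedDataParallel. It is far simpler than anything used for MT-NLG, where the model itself must also be split across GPUs; the layer size, batch size, and step count are all illustrative.

```python
# Toy data-parallel training sketch with PyTorch DistributedDataParallel (DDP).
# Launch with something like: torchrun --nproc_per_node=8 this_script.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                   # one process per GPU
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    model = torch.nn.Linear(1024, 1024).cuda()        # stand-in for a transformer block
    model = DDP(model)                                 # gradients are averaged across GPUs
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 1024, device="cuda")       # each rank sees a different data shard
        loss = model(x).pow(2).mean()                  # dummy loss for illustration
        loss.backward()
        opt.step()
        opt.zero_grad()

if __name__ == "__main__":
    main()
```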
MT-NLG was trained using Nvidia’s Selene machine learning supercomputer, a system made up of 560 DGX A100 servers, each containing eight A100 80GB GPUs. Selene is also powered by AMD EPYC 7742 processors and is estimated to cost over $85m, according to The Next Platform.
All 4,480 GPUs are connected to one another using NVLink and NVSwitch, and each GPU delivered more than 113 teraFLOPS during training. It’s incredibly expensive to train these models, and even on top-of-the-range hardware, software tricks are needed to cut training times.
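A quick back-of-the-envelope calculation from those figures shows how the cluster adds up; the numbers below are just the ones quoted above, multiplied out.

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
servers = 560
gpus_per_server = 8
tflops_per_gpu = 113                         # sustained per-GPU throughput

total_gpus = servers * gpus_per_server       # 4,480 GPUs in total
aggregate_pflops = total_gpus * tflops_per_gpu / 1000

print(total_gpus)                            # 4480
print(round(aggregate_pflops))               # ~506 petaFLOPS across the cluster
```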
Nvidia and Microsoft used DeepSpeed, Microsoft’s deep learning optimization library for PyTorch, which allowed engineers to push data through numerous pipeline stages in parallel to scale up Megatron-LM. In all, 1.5TB of data was processed to train the model, a process that took a little over a month.
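The training scripts themselves aren't public as far as we know, but the general shape of DeepSpeed's pipeline parallelism looks roughly like the sketch below: a stack of layers is cut into stages that live on different GPUs, and micro-batches flow through the stages back to back. The layer sizes, stage count, and batch settings here are illustrative, not MT-NLG's, and the `config=` keyword may be `config_params=` in older DeepSpeed releases.

```python
# Rough sketch of DeepSpeed pipeline parallelism, not the MT-NLG training code.
# Launch with something like: deepspeed --num_gpus=4 this_script.py
import torch
import deepspeed
from deepspeed.pipe import PipelineModule

deepspeed.init_distributed()

# Stand-ins for transformer layers; MT-NLG has 105 of them and used 35 pipeline stages.
layers = [torch.nn.Linear(1024, 1024) for _ in range(16)]
model = PipelineModule(layers=layers, num_stages=4)    # 4 GPUs -> 4 pipeline stages here

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},
}

engine, _, _, _ = deepspeed.initialize(model=model,
                                       model_parameters=model.parameters(),
                                       config=ds_config)

# engine.train_batch(data_iter) would then drive one pipelined training step,
# feeding micro-batches through the stages so the GPUs stay busy rather than idle.
```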
“By combining tensor-slicing and pipeline parallelism, we can operate them within the regime where they are most effective,” Paresh Kharya, senior director of product management and marketing for accelerated computing at Nvidia, and Ali Alvi, group program manager for the Microsoft Turing team, explained in a blog post.
“More specifically, the system uses tensor-slicing from Megatron-LM to scale the model within a node and uses pipeline parallelism from DeepSpeed to scale the model across nodes.
“For example, for the 530 billion model, each model replica spans 280 Nvidia A100 GPUs, with 8-way tensor-slicing within a node and 35-way pipeline parallelism across nodes. We then use data parallelism from DeepSpeed to scale out further to thousands of GPUs.”
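Putting the quoted numbers together shows how the three forms of parallelism multiply out across Selene's 4,480 GPUs; the data-parallel figure below is simply derived from the others.

```python
# How the quoted degrees of parallelism multiply out across Selene's GPUs.
tensor_parallel = 8          # 8-way tensor-slicing within a DGX A100 node
pipeline_parallel = 35       # 35-way pipeline parallelism across nodes
total_gpus = 4480            # 560 servers x 8 GPUs

gpus_per_replica = tensor_parallel * pipeline_parallel   # 280 GPUs per model copy
data_parallel = total_gpus // gpus_per_replica           # 16 replicas training in parallel

print(gpus_per_replica, data_parallel)                   # 280 16
```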
MT-NLG was trained on a giant dataset known as The Pile. Compiled by EleutherAI, a group of AI researchers and engineers leading a grassroots effort to open-source large language models, The Pile is made up of multiple smaller datasets totaling 825GB of text scraped from the internet, drawing on sources such as Wikipedia, academic journal repositories, and news clippings.
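One common way to train on a mixture like this is to sample documents from the component corpora with fixed weights. The sketch below shows the idea in miniature; the corpus names, contents, and weights are placeholders, not The Pile's actual composition or proportions.

```python
import random

# Placeholder sub-corpora and sampling weights -- illustrative only, not
# The Pile's real components or mixture.
corpora = {
    "wikipedia": ["example encyclopedia article", "another example article"],
    "academic_papers": ["example paper abstract"],
    "news": ["example news clipping"],
}
weights = {"wikipedia": 0.3, "academic_papers": 0.5, "news": 0.2}

def sample_documents(n: int):
    """Yield n training documents, each drawn from a weighted random sub-corpus."""
    names = list(corpora)
    probs = [weights[name] for name in names]
    for _ in range(n):
        source = random.choices(names, weights=probs, k=1)[0]
        yield source, random.choice(corpora[source])

for source, doc in sample_documents(5):
    print(source, "->", doc)
```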
At such volumes, the text can’t be fully cleansed of toxic language, which unfortunately means MT-NLG can generate offensive outputs that might be racist or sexist.
“Our observations with MT-NLG are that the model picks up stereotypes and biases from the data on which it is trained,” Kharya and Alvi said.
“Microsoft and NVIDIA are committed to working on addressing this problem. We encourage continued research to help in quantifying the bias of the model…In addition, any use of MT-NLG in production scenarios must ensure that proper measures are put in place to mitigate and minimize potential harm to users.”
For those keen to try out MT-NLG, there’s bad news: we’re told it’s not going to be commercially available any time soon.