Nvidia Is Coming for Intel’s and AMD’s Lunch, Too, With New Chip Designs

Nvidia’s GPUs for AI are all the rage right now, but there’s a lot more to the story.

The world has gone crazy over generative artificial intelligence (AI), and we have Nvidia's (NVDA) financial outlook for the ages to thank for it. Fueled by rising demand for generative AI services like ChatGPT, Nvidia said its quarterly revenue is poised to skyrocket from $7.2 billion last quarter (the three months ended in April 2023) to $11 billion next quarter, and to hold near that level for the rest of this year.

Growth like this from a company of such size is incredibly rare, and Nvidia is keeping its foot on the gas. At a recent tech conference, it announced more AI supercomputing hardware: the DGX GH200. This is more than just a GPU system, though; it's actually a GPU-CPU combo. Nvidia is making forays into other parts of the data center, disrupting business that has historically been dominated by Intel (INTC) and Advanced Micro Devices (AMD).

Nvidia adds to its lineup of chip flavors

The DGX GH200 is a massive computing unit. It strings together 256 Grace Hopper Superchips. Nvidia's Grace Hopper Superchips combine a CPU (central processing unit, the longtime workhorse of computing that crunches data to run software) and a GPU (graphics processing unit, used to accelerate that data crunching) into a single package.

To be sure, the DGX GH200 is a specialized piece of equipment designed for massive AI model training. It also underscores Nvidia’s growing lineup of chip types over the years. In tandem with the DGX GH200 announcement, Nvidia rattled off some of the other current chip designs it offers.

| CHIP ARCHITECTURE NAME | DESCRIPTION |
| --- | --- |
| Hopper | The latest GPUs, built on decades of Nvidia's work in high-end video game graphics |
| Grace | Energy-efficient CPUs designed using Arm — the same chip licensing company Apple uses for its iPhone and MacBook processors |
| BlueField | Data processing units (DPUs) for offloading network, storage, and security tasks from data center CPUs |
| NVLink | Chip-to-chip interconnects to speed up data transfer times, acquired with Mellanox in 2020 |

DATA SOURCE: NVIDIA.

Stepping on Intel’s and AMD’s toes?

The role Nvidia's GPUs play in new AI workloads is sending market expectations for the company sky-high. But there's a lot more to Nvidia than AI. In fact, in nearly all of its media explaining its various chip designs, Nvidia mentions both AI and high-performance computing (graphical design, 5G network management, manufacturing compute, etc.).

In other words, Nvidia silicon can handle far more than generative AI.

That's why the foray into CPUs and other chip types is an important development for Nvidia. Not only is it tackling a brand new market in generative AI, but it's also increasingly stepping on the toes of Intel and AMD. Historically, enterprise computing (both in on-premises servers and data centers, and in the cloud) has been handled by CPUs from the Intel-AMD duopoly. And as a new AI and high-performance computing era dawns, Intel's and AMD's market is still quite large — but possibly ripe for the taking, given Nvidia's more energy-efficient Arm-based CPUs paired with its GPUs.

| COMPANY | SEGMENTS | TRAILING-12-MONTH REVENUE |
| --- | --- | --- |
| Intel | Data center, AI, networking, and edge | $25.0 billion |
| AMD | Data center and embedded compute | $11.6 billion |
| Nvidia | Data center | $15.5 billion |

DATA SOURCES: INTEL, AMD, NVIDIA.

Nvidia’s long-term potential is still incredibly large

AI represents an inflection point for the data center and cloud computing industry. On the last earnings call, Nvidia CEO Jensen Huang said that the computing hardware installed in all of the world’s data centers is worth $1 trillion. This hardware needs to be replaced about every four to five years, making for a very large and consistent market opportunity.

But AI and cloud-based high-performance computing mean that much of the existing data center architecture — again, dominated by the CPUs that Intel and AMD have historically provided — needs more than a simple refresh. Going forward, Nvidia believes far more GPUs, energy-efficient CPUs, and other computing accelerators will be needed than traditional CPUs from Intel and AMD. Data center infrastructure value could be heading well north of $1 trillion from here.

Of course, it’s going to take more than four or five years to replace this massive amount of infrastructure. Nevertheless, it illustrates the massive opportunity still ahead of Nvidia and its expanding portfolio of chips — starting with GPUs, but also including new Arm-based CPUs and various networking components. Nvidia stock currently trades for 52 times this year’s expected earnings per share. It’s a premium price tag, but well deserved.