Nvidia’s (NVDA) licensing deal with chip startup Groq (GROQ.PVT) shows how the tech giant is leveraging its massive cash pile to sustain its preeminence in the AI market.
Nvidia this week said it struck a non-exclusive deal to license Groq's technology and hired the startup's founder and CEO, Jonathan Ross, along with its president and other employees. CNBC reported the agreement to be worth $20 billion, which would make it Nvidia's largest-ever deal. (The company declined a request for comment on the figure.)
Bernstein analyst Stacy Rasgon said in a note to clients Thursday that the Nvidia-Groq deal “appears strategic in nature for NVDA as they leverage their increasingly powerful balance sheet to maintain dominance in key areas.” Nvidia’s free cash flow climbed more than 30% year over year to $22 billion in its most recent quarter.
“This transaction is … essentially an acquisition of Groq without being labeled one (to avoid the regulators’ scrutiny),” added Hedgeye Risk Management analysts in a note Friday.
The move is just the latest in a string of AI deals by Nvidia, the world’s first $5 trillion company. The chipmaker’s investments in AI firms span the entire market, ranging from large language model developers such as OpenAI (OPAI.PVT) and xAI (XAAI.PVT) to “neoclouds” like Lambda (LAMD.PVT) and CoreWeave (CRWV), which specialize in AI services and compete with its Big Tech customers.
Nvidia has also invested in chipmakers Intel (INTC) and Enfabrica. The company made a failed attempt around 2020 to acquire British chip architecture designer Arm (ARM).
Nvidia’s wide-ranging investments — many of them in its own customers — have led to accusations that it’s involved in circular financing schemes reminiscent of the dot-com bubble. The company has vehemently denied those claims.
Groq, meanwhile, was looking to become one of Nvidia’s rivals.
Founded in 2016, Groq makes LPUs (language processing units) geared toward AI inferencing and marketed as alternatives to Nvidia’s GPUs (graphics processing units).
Training AI models involves teaching a model to learn patterns from large amounts of data, while “inferencing” refers to using that trained model to generate outputs. Both processes demand massive computing power from AI chips.
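In code, the two phases look roughly like this. The snippet below is a minimal, hypothetical PyTorch sketch using a toy model and random data as stand-ins; it is meant only to illustrate the distinction, not any specific company's software.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)  # toy stand-in for a large AI model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: show the model labeled data, then update its weights.
x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = loss_fn(model(x), y)
loss.backward()      # compute gradients from the error
optimizer.step()     # adjust the model's weights

# Inference: run the trained model forward, with gradients disabled.
with torch.no_grad():
    prediction = model(torch.randn(1, 128)).argmax()
```

Training repeats the first loop over enormous datasets, while inference runs only the final forward pass, but at massive scale across users, which is why both phases demand so much computing power.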
While Nvidia easily dominates the chip market for AI training, some analysts argue that the company could soon see greater competition in the inference space. That's because custom chips like Google's (GOOG) TPUs (tensor processing units), and arguably Groq's LPUs, may be better suited for certain tasks. LPUs, for instance, can be faster and more energy efficient for certain models because they store data in SRAM, a type of memory built directly into the chip. Nvidia's GPUs, by contrast, rely on off-chip high-bandwidth memory (HBM) made by companies like Micron (MU) and Samsung (005930.KS).
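A rough back-of-envelope calculation shows why memory placement matters. Generating each token of output typically requires reading the model's weights from memory, so decode speed is often capped by memory bandwidth rather than raw compute. The sketch below is illustrative only; the bandwidth and model-size figures are ballpark assumptions chosen for the example, not numbers from the article or either company.

```python
# Back-of-envelope: token generation is often memory-bandwidth bound, so
# per-token latency is roughly (bytes of weights read) / (memory bandwidth).
# The figures below are rough, illustrative assumptions, not vendor specs.

def tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Upper bound on single-stream decode speed for a memory-bound model."""
    return bandwidth_bytes_per_s / model_bytes

MODEL_BYTES = 14e9        # e.g., a 7-billion-parameter model at 2 bytes per weight

HBM_BANDWIDTH = 3.35e12   # ~3.35 TB/s, ballpark for one GPU's off-chip HBM
SRAM_BANDWIDTH = 80e12    # ~80 TB/s, ballpark for one chip's on-chip SRAM

print(f"HBM-bound decode:  ~{tokens_per_second(MODEL_BYTES, HBM_BANDWIDTH):,.0f} tokens/s")
print(f"SRAM-bound decode: ~{tokens_per_second(MODEL_BYTES, SRAM_BANDWIDTH):,.0f} tokens/s")
```

The trade-off is capacity: on-chip SRAM amounts to only a few hundred megabytes per chip, so large models must be split across many chips, which is the basis of the memory-capacity skepticism quoted later in this story.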
Ross, the Groq CEO, said in a recent interview that the upstart aimed to provide chips for half the world’s AI inference computing needs — and cheaply.
“What we want to do is we want to drive the cost of compute as close to zero as we can get it. Every year we want to make it cheaper,” he told Indian business outlet YourStory. Notably, Ross has already helped create Nvidia’s greatest source of competition: he led the development of Google’s first-generation TPU.
Cantor Fitzgerald analyst CJ Muse said Nvidia’s “acqui-hire” of Groq talent and licensing of its intellectual property showed the chipmaker “is playing both offense and defense” in the AI space. Muse said the deal would allow Nvidia to take “even greater share of the inference market.”
Nvidia shares rose roughly 1% Friday.
Others on Wall Street were more confused by Nvidia’s move and its potential $20 billion price tag. Hedgeye Risk Management analysts argued that Groq’s chips are “still unproven” when it comes to large AI models, due to their low memory capacity.
“Groq’s current technology is greatly limited to only a small subset of inference workloads,” added DA Davidson analyst Alex Platt.

