Nvidia Signs a Deal to License Groq AI Tech

Nvidia Licenses Groq’s Inference Technology Amid Reports of a $20B Deal
Nvidia entered into a non-exclusive licensing agreement for Groq’s AI inference technology on Dec. 24, 2025.
Groq founder Jonathan Ross and President Sunny Madra will join Nvidia, along with other members of Groq’s team.
Terms were not disclosed, but CNBC reported the deal was valued at about $20 billion, an enormous premium over Groq’s recently reported $6.9 billion valuation.
Rather than a clean acquisition, Nvidia struck a technology license paired with a leadership migration: an arrangement that can shift competitive dynamics without technically being an M&A deal.
In a Dec. 24, 2025, press release, Groq announced it had signed a non-exclusive licensing agreement with Nvidia covering its AI inference technology, with Jonathan Ross and Sunny Madra joining Nvidia to help “advance and scale” the licensed technology. Groq said GroqCloud “will continue uninterrupted” and that the company remains independent under new CEO Simon Edwards.
The most debated part of the deal is the money. Groq did not disclose financial terms, but CNBC reported the deal is worth around $20 billion. For context, Groq was valued at $6.9 billion after raising $750 million in September 2025.
That headline number sits awkwardly with the wording of the deal itself: “non-exclusive” and “$20 billion” do not usually go together. One explanation is that “non-exclusive” describes the form of the license, while the actual value (if that $20 billion figure is accurate) lies in what came with it.
In other words, the license may be non-exclusive on paper, but by hiring the key people who know how to execute Groq’s LPU strategy, Nvidia could gain something close to exclusivity in practice.
Why Nvidia Needs This Right Away
Nvidia’s business increasingly depends on inference, where the viability of real-world products comes down to cost and speed. Sales copilots, marketing automation tools, medical documentation assistants, and legal summarization software all depend on serving speed. If inference is slow or expensive, the product cannot scale.
This is where Groq’s positioning comes in. Groq builds its LPU (Language Processing Unit) stack around deterministic execution and an architecture designed for predictable serving, not just raw throughput.
Nvidia isn’t just “working with an inference chip startup.” The wording suggests Nvidia may be licensing parts of Groq’s LPU approach: a combined hardware-and-software design that makes inference fast and predictable through large amounts of on-chip memory and a compiler that schedules execution in advance.
Groq says it will keep operating normally: it stays independent, GroqCloud keeps running, and Simon Edwards takes over as CEO. The open question is whether Groq can keep building and shipping while Nvidia hires away its key LPU leaders.
The next clue will be the product. If Nvidia releases inference hardware or software that behaves like Groq’s predictable serving stack, this deal becomes less about the price and more about Nvidia trying to own more of the “serving” layer of AI for business automation.
Y. Anush Reddy is a contributor to this blog.