TLDR
- Nvidia shares advanced 0.3% in early Tuesday trading to $202.74, approaching its October record close of $207.
- Google plans to introduce its next-generation tensor processing units (TPUs) at the Google Cloud Next conference in Las Vegas, created in collaboration with Marvell Technology.
- The new TPU generation targets AI inference tasks (where models generate responses to user inputs), while Nvidia maintains dominance in model training.
- KeyBanc’s John Vinh reaffirmed his Overweight stance on Nvidia with a $275 target, highlighting the CUDA platform as a significant competitive advantage.
- Google secured a multibillion-dollar TPU agreement with Meta and extended Anthropic’s access to as many as 1 million chips, though supply limitations persist.
Nvidia continues its impressive run. The semiconductor giant’s shares have surged 15% in the past 30 days and are now approaching record territory. The upward trajectory persisted Tuesday despite Google’s impending announcement in the artificial intelligence chip arena.
Trading at $202.74 before the opening bell, Nvidia posted a 0.3% gain. The stock is closing in on its all-time closing record of slightly above $207, established in October 2025.
The rally continued as market participants anticipated forthcoming quarterly results from leading technology firms. Investor sentiment around Nvidia’s core operations appears increasingly positive.
However, not everything is smooth sailing. Google is poised to reveal its latest generation of tensor processing units (TPUs) during this week's Google Cloud Next event in Las Vegas.
Google’s Inference Push
Bloomberg reports that Google created these newest processors through a partnership with Marvell Technology. The chips prioritize AI inference: the operational phase where a deployed model delivers responses to user questions.
“The battleground is shifting towards inference,” Gartner analyst Chirag Dekate told Bloomberg. Google Chief Scientist Jeff Dean echoed that view, saying it now makes sense to specialize chips for either training or inference as AI demand grows.
Google has been working toward this milestone for several years. The TPU initiative now counts Meta among its prominent clients: the social networking company executed a multibillion-dollar contract to acquire TPUs through Google Cloud. Anthropic similarly broadened its TPU capacity to potentially 1 million processors.
A structural advantage exists as well. Among leading AI developers, none manufactures proprietary chips at a scale comparable to Google, strengthening the connection between model development teams and hardware design engineers.
Google has simultaneously expanded TPU accessibility. PyTorch developers can now utilize TPUs, and the company has reportedly tested on-site TPU installations for corporate clients, moving away from its traditional cloud-exclusive approach.
Nvidia’s CUDA Moat
Wall Street analysts remain unfazed. KeyBanc’s John Vinh upheld his Overweight recommendation on Nvidia Monday with a $275 price objective, contending that the CUDA software ecosystem establishes substantial obstacles for potential rivals.
“We see limited competitive risks and expect Nvidia to continue to dominate one of the fastest-growing workloads in cloud and enterprise,” Vinh wrote.
Nvidia CEO Jensen Huang has stated previously that his company’s chips can execute applications “you can’t do with TPUs.” Significantly, Google itself continues deploying Nvidia GPUs alongside proprietary TPUs for artificial intelligence initiatives.
Nvidia’s forthcoming Vera Rubin architecture remains projected to be the most sophisticated AI hardware available upon release.
Supply constraints may also impede Google's expansion plans. An anonymous startup executive told Bloomberg that TPU availability posed genuine challenges, with access restricted for anyone outside what Google considers "the more elite teams."