Graphics by AJP Song Ji-yoon

SEOUL, December 01 (AJP) - As Google’s tensor processing unit (TPU) emerges as a formidable challenger to Nvidia’s near-monopoly in AI computing, the clearest signals of how the rivalry is unfolding may be found not in the chips themselves, but in the earnings and order books of Korea’s two memory giants: Samsung Electronics and SK hynix.
Samsung is said to have supplied more than 60 percent of the high-bandwidth memory (HBM) used in Google’s TPU designs through Broadcom, Google’s chip-design partner, and is expected to expand its share next year with sixth-generation HBM4.
Until the first half of this year, SK hynix had been the primary supplier of HBM3E chips for Google’s Ironwood TPU, but that dynamic may shift in the second half and into next year, analysts say.
Each TPU, typically equipped with six to eight HBM stacks, is also believed to cost up to 80 percent less than Nvidia’s H100 GPU, a key reason hyperscalers are accelerating adoption.
A structural shift beneath the GPU–TPU rivalry
Behind the GPU-versus-TPU debate lies a broader transformation in how artificial intelligence infrastructure is being built.
According to Kyung Hee-kwon, a senior researcher at the Korea Industrial Research Institute, the global AI transition is increasingly being shaped not by chipmakers, but by big tech platforms designing computing systems around their own data and workloads.
“AI today is being led by platform companies — what many refer to as the Magnificent Seven,” Kyung said. “These firms are focused on agentic AI that enables large-scale automation, rather than fully autonomous human-like intelligence.”
For years, Nvidia’s GPUs were seen as indispensable for AI computation. But semiconductors, Kyung noted, are tools — not ends in themselves.
“If a chip delivers better power efficiency and performance for a specific purpose, there is no inherent reason it must be a GPU,” he said.
Google’s TPU, developed over several years and now deployed at scale in its data centers, exemplifies this shift. Because Google is not tied to Nvidia’s CUDA software ecosystem and instead operates a vertically integrated stack, its TPU accelerators can demonstrate efficiency gains in targeted workloads.
Google’s seventh-generation TPU, Ironwood, unveiled at Google Cloud Next 2025 in Las Vegas on April 12, 2025. Yonhap

Still, Kyung emphasized that TPUs and GPUs serve complementary roles.
“This is not about GPUs being replaced altogether. GPUs remain critical for training and general-purpose computing. What we are seeing is the emergence of alternative accelerators — especially where power efficiency and supply constraints matter.”
Supply bottlenecks push platforms toward custom silicon
With global foundry capacity stretched and delivery lead times extending into years, hyperscalers are increasingly unwilling to wait for GPUs.
“AI has become a technology tied to national competitiveness and security,” Kyung said. “If GPU supply cannot meet immediate demand, companies will seek viable alternatives that can be deployed now.”
This pressure has accelerated a wave of custom-silicon development far beyond Google, including Amazon’s Trainium, Microsoft’s Maia, and in-house AI accelerators at Tesla and other platforms.
Memory remains the constant, regardless of who wins
For Korea’s memory makers, the implications are structurally favorable regardless of which accelerator architecture gains ground.
“Whether computing shifts from GPUs to custom accelerators, Korea’s role fundamentally remains the same,” Kyung said. “High-bandwidth memory, advanced mobile DRAM and graphics memory are essential across all AI architectures. What changes is the route to market, not the underlying demand.”
This explains why Samsung Electronics and SK hynix sit at the center of both GPU- and TPU-driven ecosystems — and why their contrasting exposures offer a clearer lens into the AI race than any single chip announcement.
According to Park Yu-ak, an analyst at Kiwoom Securities, Samsung’s growing presence in custom accelerators reflects its broader footprint across memory and foundry, while SK hynix continues to anchor the high-end GPU market through its HBM leadership.
Candice Kim, Reporter, candicekim1121@ajupress.com