
- Nvidia is bringing Samsung Foundry into NVLink Fusion to expand custom AI silicon
- NVLink Fusion allows CPUs, GPUs, and accelerators to communicate seamlessly
- Intel and Fujitsu can now create CPUs connected directly to Nvidia GPUs
Nvidia is deepening its efforts to make itself indispensable in the AI landscape by expanding its NVLink Fusion ecosystem.
Following a recent collaboration with Intel, which allows x86 CPUs to connect directly to Nvidia platforms, the company has now enlisted Samsung Foundry to help design and manufacture custom CPUs and XPUs.
The move, announced during the 2025 Open Computing Project (OCP) Global Summit in San Jose, demonstrates Nvidia’s ambition to expand its control across the entire hardware spectrum of AI computing.
Integrating new players into NVLink Fusion
NVLink Fusion is an IP and chiplet solution designed to integrate central processing units (CPUs), GPUs, and accelerators seamlessly into MGX and OCP infrastructure, explained Ian Buck, Nvidia’s vice president of HPC and Hyperscale.
It enables direct, high-speed communication between processors within large-scale systems, with the goal of removing traditional performance bottlenecks between computing components.
During the summit, Nvidia revealed several ecosystem partners, including Intel and Fujitsu, both of which are now able to build CPUs that communicate directly with Nvidia GPUs via NVLink Fusion.
Samsung Foundry joins this list, offering complete design-to-manufacturing expertise for custom silicon, an addition that strengthens Nvidia’s reach in semiconductor manufacturing.
The collaboration between Nvidia and Samsung reflects a growing shift in the AI hardware market.
As AI workloads expand and competition intensifies, Nvidia’s custom CPU and XPU designs aim to ensure its technologies remain central in next-generation data centers.
According to TechPowerUp, Nvidia’s strategy comes with severe limitations.
Custom chips developed under NVLink Fusion must connect to Nvidia products, with Nvidia retaining control of the communications controllers, PHY layers, and NVLink Switch licensing.
This gives Nvidia significant leverage in the ecosystem, though it also raises concerns about openness and interoperability.
This tight integration comes as competitors such as OpenAI, Google, AWS, Meta, and Broadcom develop in-house chips to reduce reliance on Nvidia hardware.
Nvidia is working to embed itself deeper into the fabric of AI infrastructure by making its technologies inevitable rather than optional.
With NVLink Fusion and the addition of Samsung Foundry to its dedicated silicon ecosystem, the company is expanding its influence across the full hardware spectrum, from chips to data center architectures.
This reflects a broader trend among rival chipmakers. Broadcom is moving deeper into AI with dedicated hyperscale accelerators.
OpenAI is also said to be designing its own in-house chips to reduce reliance on Nvidia GPUs.
Together, these developments represent a new phase of competition in AI hardware, where control of the silicon-to-software pipeline determines who leads the industry.
Nvidia’s partnership with Samsung appears aimed at countering this by accelerating the rollout of custom solutions that can be quickly deployed at scale.
By integrating its IP into broader infrastructure designs, Nvidia is positioning itself as a key part of modern AI factories, rather than just a GPU supplier.
Despite Nvidia’s contributions to the OCP Open Hardware Initiative, its NVLink Fusion ecosystem maintains strict boundaries that favor its architecture.
While this may ensure performance benefits and ecosystem consistency, it may also raise concerns about vendor lock-in.