Data centres as we know them are on their way out. The traditional model of providing storage and computing power for information systems is being replaced by something entirely different – and nobody in corporate boardrooms should ignore what’s happening.
Tech giants have already committed over $1 trillion to AI infrastructure projects in 2025, with McKinsey projecting that AI infrastructure spending could reach $6.7 trillion by 2030. This isn’t just about upgrading servers – it’s about rebuilding the foundation of how computing gets done.
From Data Centres to AI Factories
At Computex 2025, Nvidia CEO Jensen Huang put it bluntly: ‘Nvidia is not a technology company only anymore, in fact, we’re an essential infrastructure company.’ His vision goes beyond traditional data centres to what he calls ‘AI factories’ – facilities designed to produce tokens rather than simply store data.
The distinction matters enormously for anyone making technology decisions. Traditional data centres were built around the idea that computing happens in response to requests. AI factories work differently – they’re designed for the continuous production of intelligence, generating reasoning and problem-solving capability at a scale not seen before.
Huang’s announcement of new products including NVLink Fusion and the Grace Blackwell NVL72 rack-scale system reflects this fundamental change. ‘What used to be one-shot AI is now going to be thinking AI, reasoning AI, inference time scaling AI and that’s going to take a lot more computation,’ he explained.
The Infrastructure Arms Race Intensifies
Amazon has committed $75 billion to AI infrastructure in 2025, while Microsoft plans to spend $80 billion on AI data centres this fiscal year. Meta is raising its capital expenditure budget, and Alphabet has committed $75 billion to the race.
The battle for AI infrastructure dominance is reshaping global technology supply chains. AMD continues to challenge Intel in the x86 market while making gains in AI accelerators, and Broadcom has emerged as a critical player in networking infrastructure for AI systems.
Taiwan finds itself at the centre of this transformation. Nvidia announced partnerships with the Taiwanese government, Foxconn and TSMC to build what Huang called the first ‘giant AI supercomputer’ for Taiwan’s AI infrastructure ecosystem. The island’s position as a semiconductor manufacturing hub makes it crucial to global AI ambitions.
The scale of investment reflects the fundamental challenge facing every major corporation: AI energy demands are expected to more than double electricity requirements by 2026, forcing companies to completely rethink their infrastructure strategies.
Why Businesses Should Care
For companies outside the technology sector, these developments matter because AI infrastructure is becoming as essential as electricity or internet connectivity. Huang’s prediction that ‘in 10 years time you will look back and you will realize that AI has now integrated into everything’ isn’t hyperbole; if anything, that integration may arrive even sooner.
The global AI data centre market is projected to grow at 28.3% annually through 2030, reaching approximately $157 billion. But the real significance lies in what this infrastructure enables: what Huang calls ‘Agentic AI’ – systems that can understand, think and act independently.
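Those growth figures can be sanity-checked with simple compound-growth arithmetic. The sketch below works backwards from the cited 28.3% annual rate and the ~$157 billion 2030 figure to the market size they imply today; the 2024 base year is an assumption for illustration, not a number from the article.

```python
# Back-of-envelope check of the cited figures: 28.3% annual growth
# reaching roughly $157B by 2030. Treating 2024 as the base year
# (an assumption) gives six compounding periods.
CAGR = 0.283
TARGET_2030 = 157.0          # USD billions
YEARS = 2030 - 2024          # compounding periods

implied_2024_base = TARGET_2030 / (1 + CAGR) ** YEARS
print(f"Implied 2024 market size: ~${implied_2024_base:.0f}B")

# Forward projection from that implied base, year by year
size = implied_2024_base
for year in range(2024, 2031):
    print(f"{year}: ${size:.0f}B")
    size *= 1 + CAGR
```

Running this implies a market in the mid-$30-billions today – consistent with the article’s claim that the real story is the growth trajectory rather than the current size.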
‘Agentic AI is basically a robot in a digital form,’ Huang explained. ‘These are going to be really important in the coming years, we’re seeing enormous progress in this area.’
The Custom Silicon Revolution
One of the most significant announcements from Computex was Nvidia’s decision to open up its NVLink technology through NVLink Fusion. Previously, Nvidia sold complete systems built with its own components. Now, organisations can mix Nvidia GPUs with custom components, creating more flexible architectures.
This reflects a broader trend towards custom silicon. Broadcom and other companies are enabling cloud giants to develop custom AI accelerators, potentially reducing costs by 40% compared to standard GPUs. Companies like Google, Meta and Amazon are already designing their own AI chips to reduce dependence on any single supplier.
Custom silicon allows companies to optimise hardware for specific AI workloads, potentially delivering performance advantages that translate into competitive benefits. Yet this trend also highlights the geopolitical tensions surrounding AI development, with Chinese firms like DeepSeek challenging US tech dominance through more efficient algorithms rather than purely relying on hardware advantages.
The Networking Challenge
As AI systems scale up, networking becomes increasingly critical. Broadcom CEO Hock Tan estimates that networking currently represents 5-10% of data centre spending, but this could grow to 15-20% as the number of interconnected GPUs increases.
Nvidia’s NVLink technology addresses this challenge by enabling high-speed connections within and across server racks. The company promises annual performance improvements, with the Grace Blackwell GB300 offering 1.5 times more inference performance and double the networking capacity compared to previous generations.
For businesses evaluating AI infrastructure investments, networking considerations are becoming as important as processing power. The ability to scale AI systems depends heavily on how efficiently data can move between computing resources.
This infrastructure transformation is already creating ripple effects across sectors, from autonomous vehicles gaining commercial viability to cost-effective alternatives emerging in markets like India, where GPU scarcity is spurring CPU-based AI solutions.
The Road Ahead
The transformation from data centres to AI factories represents more than a technology upgrade – it’s a complete rethinking of computing infrastructure. Companies that understand this early will have significant advantages over those that treat AI as just another software application.
For organisations without in-house capabilities, Nvidia is offering detailed blueprints to accelerate AI factory construction. But the broader lesson is clear: the computing infrastructure that powers business operations is changing fundamentally, and the companies that adapt fastest will define the next decade of competitive advantage.
As Huang noted, Nvidia has been scaling computing performance by about a million times every decade – and they’re still on that trajectory. For businesses, the question isn’t whether AI infrastructure will transform their industries, but how quickly they can adapt to take advantage of the opportunities this transformation creates.
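To put that trajectory in perspective, a million-fold gain per decade implies roughly a 4x improvement every year – far steeper than classic Moore’s-law doubling every two years, which compounds to only 32x over a decade. A quick sketch of the arithmetic:

```python
# "A million times every decade" implies an annual multiplier of
# 10**(6/10), i.e. roughly 4x per year.
per_decade = 1_000_000
annual_multiplier = per_decade ** (1 / 10)
print(f"Implied annual gain: ~{annual_multiplier:.1f}x")

# Classic Moore's law: doubling every 2 years = 5 doublings per decade
moores_law_decade = 2 ** 5
print(f"Moore's-law decade gain: {moores_law_decade}x")
```

The gap between ~4x per year and Moore’s-law doubling is the crux of Huang’s argument: this pace comes from rethinking the whole system – chips, networking and software together – not from transistor scaling alone.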