The rise of China’s DeepSeek is forcing a rethink of AI energy demand forecasts, especially in the US power sector. While AI hype remains strong, DeepSeek’s efficiency-focused approach suggests that runaway power consumption may not be inevitable.

The Chinese startup has demonstrated that high-performance AI models can be developed at a fraction of the cost and energy used by early leaders like OpenAI. This underscores a broader industry trend: AI is not just a power-hungry disruptor but also a driver of energy efficiency.

Tech experts have long argued that data centres are becoming more efficient as quickly as AI is expanding. Dion Harris, head of data-centre product marketing at Nvidia, pointed out: “In the last decade, inference efficiency in some language models has improved by 100,000 times.” He believes this trend will continue, offsetting AI’s rising computational demands.

DeepSeek’s approach highlights software-driven efficiency gains, optimising older Nvidia or equivalent chips using reinforcement learning and lower-precision algorithms. These techniques cut energy consumption while maintaining strong AI performance, challenging the assumption that ever more power-hungry data centres are inevitable.
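A rough illustration of why lower precision saves energy (a generic sketch, not DeepSeek’s actual method): storing model weights in 16-bit rather than 32-bit floating point halves the memory that must be moved on every inference pass, and memory traffic is a major driver of data-centre power draw.

```python
import numpy as np

# Generic illustration: the same 1024x1024 weight matrix stored at two precisions.
weights_fp32 = np.random.rand(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

# Half-precision storage halves the bytes moved per inference pass.
print(weights_fp32.nbytes)  # 4,194,304 bytes (4 MiB)
print(weights_fp16.nbytes)  # 2,097,152 bytes (2 MiB)
```

Production systems apply the same idea far more aggressively, with 8-bit and even 4-bit formats, but the principle is identical: fewer bits per weight means less data movement and less energy per query.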

This shift has major implications. If AI development prioritises efficiency, the surge in global energy demand that AI expansion was predicted to drive could be more moderate than expected. These advances may also reshape global AI competition, giving an edge to firms that focus on smart energy use rather than raw power.

As AI continues to evolve, the industry may find itself at a crossroads—pursuing ever-larger, energy-intensive models or refining software and hardware efficiencies to keep power consumption in check.

-Agencies