A new geopolitical and economic order driven by artificial intelligence is forcing middle powers to navigate a reality in which the most powerful AI systems are controlled by a handful of firms in the United States and China. This concentration of power presents a formidable challenge, but the narrative of an AI race in which these nations are hopelessly outmatched is fundamentally misleading. The true long-term competition is not about who can build the largest frontier models, but about who can most effectively capture the immense economic and strategic value of AI through widespread deployment, adaptation, and integration into existing sectors. For nations outside the US-China duopoly, the path to relevance and prosperity in the AI era runs directly through a comprehensive, full-stack open-source strategy. Such a strategy offers a critical lever to build sovereign capability, foster innovation, and secure a competitive edge without attempting to own the frontier of model development itself.
1. The Shifting Landscape: From Models to Ecosystems
The prevailing focus on massive frontier models overlooks a crucial shift in how value is created and captured within the artificial intelligence sector. While these large models represent the cutting edge, a new intermediary layer between them and their real-world applications is rapidly becoming the primary arena for innovation and competitive advantage. This layer encompasses critical functions such as distillation, where large models are compressed into smaller, more efficient versions; fine-tuning, which adapts models to specific tasks or domains; and inference optimization, which makes them faster and cheaper to run. It is in this space that generic AI capabilities are transformed into low-latency, cost-effective systems that can operate on-device or on-site. These functions depend heavily on shared tooling, open standards, and reusable software components. The economic and strategic gains from AI will therefore flow not just to the inventors of the technology, but to those who build the applications, own complementary assets, and cultivate innovation ecosystems capable of absorbing and reallocating its value.
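To make the distillation step concrete, here is a minimal sketch of the core training loop in PyTorch: a small student model learns to match the temperature-softened output distribution of a frozen teacher. The models, temperature, and mixing weight are illustrative assumptions, not a reference to any particular program.

```python
# Minimal knowledge-distillation sketch (PyTorch). The teacher/student
# models and hyperparameters are illustrative placeholders.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, batch, optimizer, T=2.0, alpha=0.5):
    """One training step: blend hard-label loss with soft-label KL loss."""
    inputs, labels = batch
    with torch.no_grad():                      # teacher is frozen
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)

    # Soft targets: match the teacher's temperature-softened distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                # standard temperature scaling

    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same pattern scales from this toy loop to compressing a large open model into one small enough to run on-device or on-site.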
This evolution necessitates thinking about AI not as a single product but as a complete technology “stack”: a layered system in which each component contributes to the final outcome. While the top layer of frontier models and the bottom layer of high-performance computing infrastructure are highly concentrated, many layers in between—including software frameworks, developer tools, data pipelines, and deployment infrastructure—are already shaped by open-source principles. These intermediate layers play a disproportionate role in determining how quickly AI capabilities spread throughout an economy, how easily they can be adapted to local needs, and who ultimately captures the downstream value. This reality presents a clear strategic path for middle powers. By building strength elsewhere in the stack, governments can retain significant agency over critical areas such as data access, the development of open tools, public procurement rules, and the deployment of AI in regulated sectors. This approach recognizes that every nation’s AI ambitions, whether explicitly stated or not, already depend on the security and sustainability of global open-source infrastructure.
2. Navigating the Geopolitics of Open-Source AI
The growing appeal of open-source AI to middle powers has inevitably transformed it into a key instrument of geopolitical competition. As open models continue to narrow the capability gap with proprietary systems and as governments worldwide seek to leverage them, both the United States and China increasingly view openness as a strategic lever to expand their global influence. China, in particular, has embraced this strategy, with recent releases disrupting the assumption that only capital-intensive, closed models would dictate the future of AI. Chinese models now account for a majority of new open-model adoption, overtaking the US. This push is part of a broader history of leadership in open-source software, supported by one of the world’s largest domestic developer communities. The US, after initially focusing its AI strategy on closed models and safety regulations, has pivoted to frame open-source AI as a strategic asset for promoting technological diffusion and soft power, recognizing its importance in the global competition for technological alignment.
The strategic calculation for both superpowers is that by exporting open models, they can align other nations with their broader technological ecosystems and hardware stacks, thereby securing global market share. This alignment can manifest in two ways. First is vertical lock-in, where adopting an open model also leads to the adoption of the surrounding infrastructure—the specific software frameworks, cloud platforms, and optimized hardware on which the model runs. Second is horizontal lock-in, which occurs at the application layer as countries and firms become dependent on particular model families or interoperability standards, embedding their digital economies within the exporter’s ecosystem. For middle powers, this geopolitical context underscores a critical point: while openness is a powerful strategic tool, it is not a cure-all for sovereignty. Simply adopting open models does not mitigate dependencies on foreign hardware or cloud infrastructure. The strategic value of an open approach lies not in passive consumption but in actively creating a national ecosystem that can adapt, build upon, and innovate around these models, allowing nations to capture economic value without being locked into a single country’s technological sphere of influence.
3. The Tangible Advantages of an Open Ecosystem
Embracing a full-stack, ecosystem-wide perspective on open source delivers a distinct set of advantages that empower states to build genuine capability without directly competing at the frontier of AI model development. This approach fosters sovereign capability—not in the sense of complete technological independence, but as the practical ability to reliably access, adapt, and deploy AI systems on a nation’s own terms. It shifts the focus from owning a single model to cultivating a domestic ecosystem of talent, institutions, and tools required to build, understand, and improve AI across the entire pipeline. This foundation of builders, rather than mere adopters, is fundamental to long-term resilience and ensures a country can capture the economic value of AI domestically. Furthermore, an open approach can be a powerful catalyst for reimagining the state itself, enabling governments to deliver better public services under tight fiscal constraints by becoming more technologically innovative and adaptable. Open models and modular architectures allow public-sector teams to experiment with and customize tools for specific needs, replacing components as needed and avoiding lock-in to monolithic, proprietary systems.
Beyond strengthening national capacity and transforming government, an open ecosystem acts as a powerful multiplier for economic diffusion and downstream innovation. Open-source AI and software lower barriers to access, reduce costs, and accelerate experimentation, enabling small and medium-sized enterprises to adopt advanced technologies more quickly. However, the deepest economic impact comes not just from using open tools but from building upon them. This is particularly evident in AI for science, where the open release of tools like AlphaFold 2 has catalyzed global biomedical research by allowing scientists to adapt and extend its capabilities. By fostering an environment where new applications and sector-specific innovations can be built on an open foundation, nations can drive significant productivity gains. Openness also offers net security benefits, allowing defense and intelligence agencies to inspect, fine-tune, and operate models in secure, localized environments. It provides strategic optionality by preventing dependency on single vendors and promoting a competitive supplier pool. Finally, by contributing to shared international projects, from codebases to governance frameworks, middle powers can enhance their soft power, collaborate on public-service tools, and collectively shape the global AI market to their advantage.
4. Launching a Flagship Open-Source Program
Governments should establish flagship national or regional open-source AI programs designed to build domestic capability in fine-tuning, evaluating, and deploying models, thereby stimulating a vibrant open ecosystem. The goal of such a program is not to build a national large language model (LLM) from scratch, an approach that is often an inefficient use of public resources. Attempts to create a frontier-scale model require billions of dollars in compute and specialized talent, and any resulting model would likely become obsolete within months as the field advances. This path also does little to reduce underlying structural dependencies on foreign hardware and cloud infrastructure, risking the creation of an expensive artifact rather than a lasting national capability. Instead, a more effective strategy is to build a distilled and fine-tuned national LLM on top of an existing open-source foundation model. While this does not solve the problem of technological dependence, it offers significant benefits in preserving linguistic and cultural uniqueness, which becomes increasingly important as AI permeates society.
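As a minimal sketch of what such adaptation can look like in practice, the snippet below applies LoRA (parameter-efficient fine-tuning) to an open foundation model using the Hugging Face transformers and peft libraries. The model identifier is a placeholder, the target module names depend on the chosen architecture, and the training loop itself is omitted.

```python
# Minimal sketch: adapting an open foundation model with LoRA
# (parameter-efficient fine-tuning). Model ID and data are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "open-foundation-model"  # placeholder for a permissively licensed model
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA trains small low-rank adapter matrices instead of all weights,
# so a national-language corpus can be used on modest public compute.
lora = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections (model-specific)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of base parameters
# (training loop on a curated national corpus omitted)
```

Because only the small adapter weights are trained and shipped, this approach fits the public-resource constraints the text describes far better than pretraining from scratch.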
The true value of a flagship open-source project lies not in the final model itself but in the ecosystem and capability-building that occurs as a byproduct. Developing the expertise to repeatedly train, adapt, and productize open models is an invaluable national asset. This process cultivates full-stack technical expertise within national institutions, ensuring that knowledge of AI architectures, data pipelines, and optimization techniques is held domestically. It fosters a self-reinforcing community of researchers, engineers, and firms who can adapt open models for local use cases, ensuring that progress is continuous and not reliant on a single project. To be successful, such a program requires strategic public compute resources, curated national data sets, reusable open tools, institutional anchors like a public-interest laboratory, and sustained multi-year funding. By focusing on building the capacity to create and understand any model, middle powers can develop a resilient and innovative ecosystem that can harness the power of AI for national benefit, regardless of who develops the next frontier model.
5. Treating Open Tooling as Critical Infrastructure
To fully realize the benefits of AI, governments must make the creation and maintenance of open-source tools a cornerstone of their national strategies, recognizing that value and resilience are found not only in the models but also in the software layers that support them. AI is only as effective as the software ecosystem it is built on, which includes both the underlying software that makes models run reliably and the higher-level tools that integrate them into real-world workflows. Therefore, a national AI strategy must prioritize open-source tool-building in three key areas: maintenance of critical infrastructure, development of tools for government, and creation of tools for science. All governments rely on critical open-source software, whether they realize it or not, and funding its maintenance is essential. Establishing a national open-source trust fund or running red-teaming competitions to identify systemic risks can mitigate vulnerabilities and ensure the stability of the digital services that depend on this shared infrastructure.
Within government itself, an “open-first” approach to building AI capability is crucial for fostering innovation and public trust. Developing tools in-house, as demonstrated by initiatives like the UK’s Incubator for Artificial Intelligence (i.AI), allows governments to create applications tailored to their specific needs at a lower cost, reduces duplication of effort, and enhances transparency. This approach also enables collaboration between like-minded countries, allowing them to pool resources and share solutions for common public-service challenges. To attract the necessary talent, governments should create incentive structures that recognize and reward open-source contributions. Similarly, in the scientific domain, new incentives are needed to encourage researchers to build, share, and maintain reusable AI-enabling tools. By treating tool development as a core research activity and funding its long-term maintenance, governments can turn publicly funded science into a continuous source of shared infrastructure that accelerates AI-enabled R&D and drives long-term productivity across the economy.
6. Curating Data and Fostering Pro-Innovation Regulation
Governments should strategically invest in curating high-quality national and international data sets in priority sectors to drive downstream innovation and build comparative advantages in AI. As the supply of general web data is exhausted, progress in AI will increasingly depend on specialized models fine-tuned on high-quality, domain-specific data. For middle powers, leveraging their unique data assets—from health records to industrial data—offers a clear path to compete in the long-term AI race of application and adoption. This requires an active strategy to build and maintain public data sets that the wider open ecosystem can use to create new solutions and companies. To guide this effort, governments should start by identifying where real user demand exists across research, public services, and industry. Rather than attempting to centralize all data, they should adopt a federated model that links data where they reside while ensuring secure, interoperable access, as sketched below. This approach preserves data ownership, maintains clear audit trails, and avoids the immense challenges of creating a single monolithic repository.
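One minimal way to picture the federated pattern, assuming a hypothetical query interface and node names: each data holder answers approved aggregate queries in place, only the aggregate leaves the node, and a coordinator records an audit trail for every access.

```python
# Minimal sketch of federated data access: data stay with their owners,
# only aggregate answers travel, and every query is audit-logged.
# Node names and the query interface are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataNode:
    """A data holder (e.g., a hospital) that answers queries in place."""
    name: str
    records: list

    def run_aggregate(self, predicate) -> int:
        # Only the count leaves the node, never the raw records.
        return sum(1 for r in self.records if predicate(r))

@dataclass
class FederationCoordinator:
    nodes: list
    audit_log: list = field(default_factory=list)

    def count_where(self, requester: str, purpose: str, predicate) -> int:
        total = 0
        for node in self.nodes:
            total += node.run_aggregate(predicate)
            self.audit_log.append({          # clear audit trail per node
                "node": node.name,
                "requester": requester,
                "purpose": purpose,
                "time": datetime.now(timezone.utc).isoformat(),
            })
        return total

# Hypothetical usage: count diabetes cases across two hospitals
# without centralizing any patient records.
nodes = [
    DataNode("hospital_a", [{"dx": "diabetes"}, {"dx": "asthma"}]),
    DataNode("hospital_b", [{"dx": "diabetes"}]),
]
fed = FederationCoordinator(nodes)
print(fed.count_where("research_team_1", "prevalence study",
                      lambda r: r["dx"] == "diabetes"))  # -> 2
```

A production system would add authentication, query approval, and privacy safeguards, but the architectural point stands: ownership and audit trails are preserved because raw data never move.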
Alongside building data infrastructure, governments must ensure their regulatory environment does not stifle the open-source ecosystem. Debates over AI and copyright, while often framed as a conflict between large tech companies and content creators, can have the unintended consequence of harming the small players who drive open innovation. Excessively strict liability, data-protection, and copyright rules can disproportionately expose startups, researchers, and entrepreneurs to legal risk, as their transparent methods make them easier targets for litigation. Such regulations can squeeze out the very innovators who seek to challenge incumbents, as they lack the financial and legal resources to absorb high licensing costs or navigate complex compliance burdens. To allow their open ecosystems to thrive, governments must provide legal certainty and technical feasibility for data use. This means offering pragmatic opt-out tools that small developers can genuinely implement, rather than imposing onerous reporting requirements; one such mechanism is sketched below. The key is a regulatory balance ambitious enough to build a national data advantage yet light-touch enough to empower the national open ecosystem.
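As one concrete example of an opt-out mechanism that small developers can honor cheaply, the sketch below checks a site's robots.txt with Python's standard library before fetching any training data. The user-agent and URL are placeholders, and a real crawler would also need to respect emerging machine-readable opt-out standards beyond robots.txt.

```python
# Minimal sketch: honoring a robots.txt opt-out before collecting
# training data. User-agent and URLs are placeholders.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

CRAWLER_UA = "example-national-ai-crawler"   # hypothetical user-agent

def may_fetch(page_url: str) -> bool:
    """Return True only if the site's robots.txt permits our crawler."""
    parts = urlsplit(page_url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()                            # fetch and parse robots.txt
    except OSError:
        return False                         # fail closed if unreachable
    return rp.can_fetch(CRAWLER_UA, page_url)

if may_fetch("https://example.org/articles/1"):
    print("allowed: fetch and record provenance")
else:
    print("opted out: skip this source")
```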
7. Shaping the Market Through Government Procurement
Public procurement is one of the most powerful yet frequently overlooked tools in a national AI strategy, capable of shaping market structures and building state capacity simultaneously. By directing public-sector demand toward open and interoperable systems, governments can stimulate a competitive open-source ecosystem, lower barriers to entry for startups, and reduce dependence on a few proprietary incumbents. This approach serves as both an industrial strategy and a capability-building tool. Deploying and maintaining open systems within government enhances internal technical literacy and engineering capacity, ensuring the state is an intelligent customer and a capable user of AI. To effectively leverage procurement, governments should invest in technical expertise within departments, appointing open-source champions to identify viable use cases and develop practical evaluation guidelines. These guidelines should help officials understand the different tiers of “openness” and weigh the cost-benefit of open versus proprietary systems for various applications, acknowledging that open source is not always the cheaper or better option.
To foster a dynamic market, procurement processes must be made accessible to the small and medium-sized enterprises (SMEs) that often drive open-source innovation. This means moving away from rigid, specification-led contracts that tend to favor large, established vendors. Instead, governments should adopt challenge-based procurement, where they present a problem to be solved, allowing for more creative and diverse solutions. Using modular contracts with shorter timelines can also make the process more agile and competitive, enabling SMEs to participate more easily. However, it is crucial to recognize that openness is not a blanket solution, and mandating arbitrary quotas for open-source adoption has proven ineffective. The trade-offs must be evaluated on a case-by-case basis, considering performance gaps between open and proprietary models, the total cost of implementation (including technical resources for fine-tuning), and the in-house capacity required for successful deployment. When used strategically, procurement can be a powerful instrument for growing a domestic open ecosystem and ensuring the government gets the best value and capability from its technology investments.
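To illustrate what such a case-by-case evaluation might look like, here is a toy weighted-scoring sketch; every criterion, weight, and score is a hypothetical illustration rather than recommended guidance.

```python
# Toy sketch of a case-by-case procurement comparison. All criteria,
# weights, and scores are hypothetical illustrations, not guidance.
CRITERIA = {               # weight of each criterion (sums to 1.0)
    "task_performance": 0.35,
    "total_cost_of_implementation": 0.25,   # licences, fine-tuning, ops
    "in_house_capacity_required": 0.15,     # less capacity needed scores higher
    "auditability_and_transparency": 0.15,
    "vendor_lock_in_risk": 0.10,            # lower risk scores higher
}

def weighted_score(scores: dict) -> float:
    """Combine 0-10 criterion scores using the agreed weights."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

open_option = {
    "task_performance": 7, "total_cost_of_implementation": 6,
    "in_house_capacity_required": 5, "auditability_and_transparency": 9,
    "vendor_lock_in_risk": 9,
}
proprietary_option = {
    "task_performance": 9, "total_cost_of_implementation": 5,
    "in_house_capacity_required": 8, "auditability_and_transparency": 4,
    "vendor_lock_in_risk": 3,
}

print(f"open:        {weighted_score(open_option):.2f}")
print(f"proprietary: {weighted_score(proprietary_option):.2f}")
```

Depending on how a department weights performance against lock-in and transparency, either option can win, which is precisely why blanket quotas are a poor substitute for structured evaluation.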
A Path to Sovereign Capability and Influence
The capabilities of frontier AI systems continue to advance, and the resources required to sustain this progress have grown far beyond the means of most governments. Leaders should work to secure access to the frontier, but for most countries, competing directly is neither effective nor realistic. The more viable strategy is to build the ecosystems, infrastructure, and capabilities required to deploy AI at scale. Yet investments in hardware alone do not amount to a comprehensive strategy, nor do they capture long-term value. A strong open-source ecosystem is a prerequisite for any country that wishes to harness the opportunities AI offers, because it drives innovation by lowering costs and accelerating downstream development. The successful middle powers will be those that sustain investment across several concrete pillars: targeted open-source programs that prioritize ecosystem development, robust tool-building for government and science, the curation of high-quality national data sets, procurement that rewards openness, and interoperable benchmarks that open markets. In the end, the AI race will be determined not by who builds the best model, but by who builds the best ecosystem.
