Superintelligence: The Next Great Power Struggle
July 24, 2025
When a novel AI system can rapidly design useful new antibiotics like Halicin, or engineer advanced chip layouts in hours that rival the best human work, we’re watching civilization-scale innovation become a real form of economic power. Hundreds of billions (and soon trillions) of dollars are flowing into AI R&D from startups, big tech, and governments. Many of them are now sprinting toward superintelligence: AI that outperforms even the most intelligent humans, if only by a margin at first. Whoever controls the most powerful superintelligent models will hold the keys to future economies, scientific discovery, and social order.

Unfortunately, for now, policymaking lags far behind the pace of AI development. In 2023, Biden’s Executive Order 14110 mandated watermarking guidance and created a number of federal AI safety roles, but the follow-through was quite limited. Now the White House under Trump has unveiled “America’s AI Action Plan”, a bold pivot toward deregulation, rapid data-center expansion, and AI export bundles for allied nations.
The core recommendation: create secure, full-stack AI export packages (hardware, software, standards) to anchor global model adoption to U.S. influence. It is essentially a geopolitical and economic gambit, not a working policy for maintaining some level of control over the key technical breakthroughs of superintelligence when it arrives.
🔍 What’s at Stake
- Scientific breakthroughs: AI is already accelerating drug discovery by years. Investment in mitochondrial disease research alone has crossed $500M this year. More dramatic discoveries are likely when superintelligence arrives, and the economic potential is almost certainly in the trillions.
- Unexpected spin-offs: Quantum-safe optimization, climate model reductions, brain‑computer interface advances—none of which fit neatly into today’s regulatory boxes.
- Global norms on open vs. closed models: Hugging Face CEO Clem Delangue warned this week that China’s open-source models risk spreading state-driven censorship and “cultural aspects…the Western world wouldn’t want to see spread”.
🚦 Forks in the Road Ahead: Open vs Closed AI Models
This is the defining dilemma of AI: open models accelerate progress, but closed models consolidate power. Early evidence is clear: open-source AI models can boost economic value and innovation by enabling faster iteration, reproducibility, and a broader developer base. But openness comes with a steep geopolitical price: China now leads in open AI development, controlling 60% of global frontier models as of 2024 by some measures.

The reality is that state actors can co-opt Western breakthroughs overnight, at least in open settings. Closed models, on the other hand, may slow innovation and limit oversight, but they retain strategic control, keeping the most advanced capabilities behind corporate or national walls. The question isn’t whether openness or secrecy is better; it’s which risks we are prepared to absorb: stagnation and concentration, or proliferation and misuse. This tradeoff defines the fork in the road ahead:
- Full-Steam Race: No regulation; model innovation runs wild. Risk: runaway power without checks, including global surveillance, identity manipulation, or misaligned AGI. The new AI Action Plan from the White House essentially puts us on this trajectory.
- Carefully Coordinated Regime: Binding export and compute caps, model provenance tracking, IP control, and multilateral audit frameworks, akin to nuclear treaties. Think: a Global AI Marshall Plan built on a democratic compute and monitoring consensus.
- Hybrid Path: Strong national innovation, transparent oversight, and alliance-level tech guarantees: a domestic push, coordinated with democratic allies, that protects and controls advances in AI models to keep them out of the hands of unfriendly foreign governments and bad actors.
🔗 Key Policy Gaps
- Export as influence: The new plan is about embedding U.S. standards through export bundles, not regulating domestic model releases.
- Open-source bias: Without shared norms, Chinese models on open platforms may propagate censorship or propaganda.
- No AGI governance: While massive sums are flowing into data centers and power generation, there are no mechanisms to govern frontier model alignment at the level of superintelligence and beyond.
🧭 Recommendations—Hedging Civilizational Risk
The core risk of uncontrolled open AI is this: once a sufficiently advanced model is released, it cannot be recalled. And unlike traditional software, frontier models can enable catastrophic misuse with minimal modification. Researchers have already shown that large language models can generate viable biological weapon synthesis protocols, design novel pathogens, and construct autonomous cyberattack chains. Capabilities once limited to nation-state labs are now one fine-tune away from public availability (RAND, Nov 2024; NTIA, Jan 2024).
The threat isn’t theoretical: open models like LLaMA have been jailbroken, with safeguards circumvented within weeks of release. Without shared global standards for provenance, control, and auditing, we risk seeding the digital equivalent of unregulated nuclear material into the open internet. The recommendations that follow assume this reality and ask what firms, enterprises, and governments must now do to stay ahead of the curve without ceding the future to chaos or authoritarian dominance.
For AI Firms (OpenAI, Meta, Anthropic, etc.)
- Implement secure-by-design: watermarking, weight-signing, and open audits (a minimal weight-signing sketch follows this list).
- Delay frontier model release until audit, licensing, and multi-party governance are in place (e.g., export as package, not leak).
- Join international coalitions to standardize responsible openness.
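To make “weight-signing” concrete, here is a minimal sketch in Python using the widely available cryptography package. The file names and record fields are illustrative assumptions, not any lab’s actual release process: the publisher hashes the released weights, signs the digest with a private release key, and ships the signed record alongside the checkpoint so anyone can verify integrity and origin.

```python
# Minimal sketch of weight-signing for a model release (illustrative,
# not any lab's actual process). Requires: pip install cryptography
import hashlib
import json
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sha256_digest(path: Path) -> bytes:
    """Hash the weights file in chunks so large checkpoints fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


def sign_release(weights: Path, key: Ed25519PrivateKey) -> dict:
    """Produce a provenance record: file digest plus a signature over it."""
    digest = sha256_digest(weights)
    return {
        "file": weights.name,
        "sha256": digest.hex(),
        "signature": key.sign(digest).hex(),
    }


def verify_release(weights: Path, record: dict, pub: Ed25519PublicKey) -> bool:
    """Recompute the digest and check the publisher's signature."""
    digest = sha256_digest(weights)
    if digest.hex() != record["sha256"]:
        return False  # weights were altered after signing
    try:
        pub.verify(bytes.fromhex(record["signature"]), digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()       # the lab's private release key
    weights = Path("model.safetensors")      # hypothetical checkpoint name
    weights.write_bytes(b"demo weights")     # stand-in for real weights
    record = sign_release(weights, key)
    print(json.dumps(record, indent=2))
    print("verified:", verify_release(weights, record, key.public_key()))
```

In a real export package, the public key would be distributed out of band (via a certificate authority or a transparency log, for example); that key-distribution step is what turns a simple checksum into actual provenance.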
For Enterprises
- Demand model provenance, watermark verifiability, and supply-chain traceability (see the detection sketch after this list).
- Invest in hybrid infrastructures: cloud + on-prem control to hedge against AI ecosystem failure.
- Insist on explainable alignment as a procurement standard.
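As for what “watermark verifiability” might look like in practice, below is a toy detector in the style of green-list statistical watermarks (Kirchenbauer et al., 2023): generation softly favors a pseudorandomly chosen “green” subset of the vocabulary keyed by the preceding token, so a verifier can test whether green tokens are over-represented. The keying scheme, green fraction, and threshold here are all illustrative assumptions.

```python
# Toy sketch of green-list watermark detection (illustrative assumptions
# throughout; real schemes key on the model's tokenizer and a secret key).
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" per step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, keyed by context."""
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] < 256 * GREEN_FRACTION


def watermark_z_score(tokens: list[str]) -> float:
    """Z-score of the observed green-token count vs. the chance baseline."""
    n = len(tokens) - 1  # number of (previous token, token) pairs
    if n <= 0:
        return 0.0
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std


if __name__ == "__main__":
    sample = "the model output an enterprise wants to audit".split()
    # A verifier flags text as watermarked above some threshold, e.g.
    # z > 4 (illustrative); unwatermarked text hovers near z = 0.
    print(f"z-score: {watermark_z_score(sample):.2f}")
```

The procurement point: a watermark is only verifiable if the detector and its keying scheme are disclosed or escrowed to the buyer, which is exactly the kind of term enterprises can write into contracts.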
For Governments
- Embed AI export strategy within a democratic bloc—U.S., EU, Japan, UK—to enforce safety norms.
- Mandate transparency: National oversight bodies to certify “open-yet-aligned” frontier models.
- Prepare deterrence doctrine: credible threat of sanction or tech suspension against misaligned or weaponized AI use.
🏁 Conclusion
Those who follow my work know that I’m very much a tech positivist. But all innovation is a double-edged sword, and AI is likely the most powerful technology we’ve ever developed. We stand at a genuinely historic juncture: build democracy-enabling superintelligence, or unleash power that could reshape societies without democratic control. The path ahead demands fusion: bold innovation with ironbound governance, while we still can. If we balance speed with structure, we might just build the bright future many of us are hoping for.