Stop asking bad questions on AI risk
The Brief: Why bad questions drive bad outcomes—and why no country can ‘win’ AI
“There is no such thing as a stupid question”, teachers like to say.
When it comes to AI risk, however, there are definitely dangerous questions.
Bad questions such as ‘who will win?’, ‘safety or speed?’, and ‘how do we join the AI race?’ shape policy in ways that increase risk.
Better outcomes begin with better questions.
This special edition of The Brief argues that AI risk is interconnected yet distinct across three levels:
the state of technology (AI safety)
the state of the world (AI competition)
the state of nations (AI readiness)
Action taken in one domain cascades into the others, often in unintended ways.
Which country has got its national AI strategy right?
By late 2025, more than half the world’s countries had published such a strategy.
Most strategies fail because they treat safety as industrial policy, competition as chip procurement, and readiness as digitisation. Each mistake stems from the same conceptual error: they ignore the interconnectedness of risk.
A more useful frame starts with three observations. First, all countries (frontier or not) are exposed to risk and opportunity across three linked dimensions: AI safety, AI competition and AI readiness. Second, success does not require dominance in every dimension. It requires deliberate engagement in each. Third, these dimensions offer multiple levels of participation, from minimal compliance to global leadership, giving countries room to calibrate ambition and manage risk.
AI is a system
Competition pressure is safety pressure.
Intense competition erodes tolerance for delay, increases incentives to deploy untested systems quickly, and weakens institutional appetite for safety rigour.
Example: A frontier lab accelerates a release after a rival demonstrates a model breakthrough. To keep pace, it relaxes internal evaluation thresholds; the model ships with untested emergent behaviours that later surface in high-stakes industrial settings.
Competition pressure is cross-border.
AI capabilities and norms diffuse across borders. One nation’s shortcut can become another’s vulnerability.
Example: A government relaxes export screening for fine-tuning tools to stimulate domestic startups. Within months, those tools appear in overseas model-as-a-service platforms, allowing actors in third countries to bypass their own safety restrictions.
Safe AI removes and adds risks.
The safety label accelerates deployment into defence and critical infrastructure, where competitive pressure creates new systemic risks.
Example: A frontier model clears safety tests, reassuring regulators and removing a moderating influence on competitive behaviour. Firms and states then push the “safe” model into high-risk domains such as autonomous defence platforms and critical-infrastructure optimisation. The safety credential accelerates deployment in precisely the environments where competitive escalation creates new systemic risks.
Readiness is political leverage.
Readiness determines whether safety decisions actually land. High AGI readiness also translates into sovereign autonomy within competitive dynamics.
Example: A mid-power with strong compliance capabilities conditions market access on meeting its audit standards. Frontier labs accept the terms because the market is valuable, giving the mid-power influence over global safety norms beyond its capability footprint.
The world where AI can do the most possible good and least possible harm has three structural features: AI is safe, the balance of power is stable, and everyone shares in the benefits.
These conditions map directly onto the three research domains underpinning this brief: technical AI safety (alignment+), competitive AI (distribution of power), and AI readiness (distribution of benefits).
To ask better questions, we take each level of analysis in turn.
I. Technical AI Safety (alignment+)
Technical AI safety confronts one overarching question: can increasingly capable AI systems be steered, constrained and audited so as to reliably serve human ends?
Less useful questions
“Will alignment be solved?”
A bad question because alignment is a spectrum, not a binary.
“Should we pause AI?”
A bad question because competition dynamics determine feasibility.
“Are models too powerful?”
A bad question because it is a moral debate disconnected from mechanism.
At the core is the alignment problem: defining the “why” of model behaviour. Alignment+ is the full stack of techniques needed to understand, constrain and supervise advanced systems, such as robustness (does it do what it claims?), interpretability (why and how did the system reach its decision?), and scalable oversight (can oversight keep pace with model capability?). Other supporting conditions include foundational infrastructure and governance.
Today, alignment remains unresolved.
Researchers broadly agree that if alignment proves impossible, further capability scaling must stop. We still lack evidence that alignment is achievable at scale. Hence the emerging distinction: align where you can; control where you must.
Control governs what models can do. It spans access control, capability control, compute control, and operational control. Control is not synonymous with pausing AI: it means modulating, conditioning and constraining capabilities.
Expanding alignment research is one of the few permissionless high-impact levers available.
More useful questions
“What behaviours can we detect (versus what remains hidden) once models scale?”
What cannot be observed cannot be regulated.
“Which safety signals generalise across architectures and use-cases?”
A model may appear safe in controlled tests yet fail under open-ended deployment. And because systems migrate between civilian and military use, dual-use means dual-risk.
“Which safety guarantees or evaluation rights must be shared across borders?”
Given that AI is dual-use, full openness is not viable. At the same time, AI safety is shared by default: either as a public good or as a shared vulnerability.
II. AI Competition
AI competition is not a sprint to a flag. It is better characterised as competitive dynamics: the ongoing interactions, resource flows and strategic manoeuvres that shape capability, diffusion, governance and, ultimately, power.
Less useful questions
“Who will win the AI race?”
A bad question because the race metaphor narrows our view to winners and losers, when in fact many gain and many lose in different dimensions.
“Which country will dominate AI by 2030?”
A bad question because it reduces dominance to model capability, ignoring the impact of diffusion and bottlenecks.
“Are we losing the AI war?”
A bad question because this conflates competition with conflict and nationalises a transnational technology.
Just as one could not ‘win’ the internet, calling a country a ‘front-runner’ misreads the temporal nature of AI benefits. The race framing collapses the system into zero-sum logic and obscures the fact that benefits diffuse unevenly across actors and time.
“In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.”
Dario Amodei, Machines of Loving Grace
Machine learning systems rely on compute, data and algorithms. A competitive state builds resilience across these while limiting rival access. But leadership is not only about capability. It is also about diffusion: how widely AI is deployed, including by adversaries. Lastly, competitive advantage hinges not just on what a state has, but on what it controls and what it forces others to depend upon.
Competition behaves like a system of shifting dependencies, not a leaderboard of model sizes.
More useful questions
“How will US and Chinese perceptions of each other’s intentions shape pacing and sequencing of capability build-out?”
States balance against threats when intentions appear malign; against capabilities when intentions are opaque.
“Which dependencies (chips, rare earths, talent flows, cloud access) will determine leverage?”
Strategic advantage is embedded in chokepoints. For our perspective on rare earths, see our recent deep dive on the topic.
“What role will other countries play in the global AI competition framework?”
Third parties are conduits of dependence, influence, resources and diffusion.
III. AI Readiness
Readiness means preparing institutions, infrastructure and state capacity for the possibility that AGI succeeds.

Less useful questions
“How do we join the AI race?”
A bad question because superpower playbooks are infeasible and strategically irrelevant for most states. At the same time, many countries underestimate the leverage their dependencies afford.
“How do we adopt AI in public services?”
A bad question because it equates readiness with digitisation. The task is not to adopt AI into existing institutions, but to adapt institutions to AI-induced demands.
“When will AGI deliver its economic boost?”
A bad question because it assumes benefits will arrive smoothly. In reality, there is a temporal quality to AI benefits: an uneven diffusion across sectors and time.
Readiness requires recognising that AI benefits will be uneven, may arrive late, and will demand institutional restructuring on a scale many governments have not yet planned for.
Even if a country is not a frontrunner in capability competition, it can still invest in readiness to absorb, govern and respond to the consequences.
Readiness becomes the buffer while the alignment and competition drivers play out.
More useful questions
“What institutional, economic and infrastructural conditions allow multiple states to benefit from frontier AI?”
Access to safe AI, benefit-sharing, and modernised institutions.
“What happens if AI’s promised benefits are delayed or uneven?”
Jagged deployment affects stability, public trust, national finances and policy choices.
“What will China and the US offer the world—and on what terms?”
In a multipolar setting, benefit-distribution strategies signal which states become balancers and which become followers.
The discipline of eliminating bad questions starts with systems thinking
In a high-stakes scenario, bad questions produce bad outcomes. If actors perceive survival to be at stake, the answer to “which country will win?”, “safety or speed?”, and “can we do this alone?” will be: us, speed, yes. The result is unilateral action, runaway competition and geopolitical instability.
Better questions are the first step to outcomes that are otherwise unreachable: plural beneficiaries, safety as a public good, and collaboration under asymmetric dependencies.
THE RED LINE VIEW
AI will transform sovereign power, but this is a plastic moment. States still have a broad set of strategic options, most of which remain under-explored. The greatest lever for reducing catastrophic risk is preserving geopolitical stability; the best hedge against instability is strengthening domestic readiness.





