Artificial Intelligence (AI) is no longer merely a technological innovation; it now operates as both a visible and a subterranean force shaping the international order. Power, once resting exclusively on armies and territory, now runs through networks and algorithms as well. Control over data and information determines not only a country’s economic prospects but also how far its citizens retain agency over their own lives.
In the United States, technological progress has historically been linked to market freedom and limited state intervention. Silicon Valley’s dominance was built on the belief that innovation flourishes where lawmakers keep their distance. AI developed as a promise of prosperity and productivity, free of prior restraints, and as a contributor to national power.
This changed when algorithms proved capable not only of manipulating information but also of operating in the “grey zone” between peace and rivalry, of leaking data to strategic competitors, and of enabling cyberattacks. Civil society now demands clear democratic safeguards, while the new administration in Washington views restrictions on AI primarily as a means of protecting defense secrets and strategic advantages rather than as an institutional safeguard for human rights.
Across the Pacific, in China, digital surveillance of society is regarded as a major institutional achievement, while dominance of the electromagnetic spectrum and superiority in the cognitive sphere are central goals of the Politburo. Yet a paradox persists: a modern state and military machine requires autonomy in decision-making and swift reactions, but the political system rests on absolute Party control. The Chinese Armed Forces have been trained for decades to operate on predetermined ideological doctrines, while officers and officials are required to seek instructions constantly, even on sub-strategic matters.
This gap between political rigidity and operational necessity (common in authoritarian regimes) may prove crucial in a real crisis. Systems trapped in their own hierarchies tend either to react too slowly or to respond dangerously fast, on reflex and without authorization. The near-launch of nuclear weapons in the autumn of 1983, triggered by successive false launch warnings from the Soviet early-warning system and prevented at the last moment only because duty officer Stanislav Petrov had the prudence and courage to bypass protocol and hierarchy, remains an enduring lesson.
And in the middle stands Europe: neither a technological superpower nor a centralized laboratory-state. The EU does not compete with the U.S. and China in platforms or semiconductors; instead, it seeks to align AI with its values. The Artificial Intelligence Act constitutes the world’s first comprehensive regulatory framework for AI systems. It governs in particular “high-risk” applications in sectors such as health, justice, and critical infrastructure, imposing requirements of transparency, safety, human oversight, and accountability, while prohibiting “unacceptable-risk” uses such as indiscriminate biometric surveillance and social scoring. It does not rely solely on market mechanisms but places the protection of fundamental rights at the core of regulation, with emphasis on the obligations of providers and deployers.
At the same time, the Council of Europe’s “Framework Convention on AI, Human Rights, Democracy and the Rule of Law,” adopted in May 2024 and open for signature by states beyond the Council’s membership, requires parties to take appropriate measures to guarantee dignity, autonomy, non-discrimination, and data protection throughout the AI life cycle. Although neither instrument automatically ensures that rights are upheld in practice, particularly in national security domains, together they establish a new international model in which technology must operate within boundaries consistent with democratic legitimacy.
Europe thus asserts itself not so much as an industrial force but as a normative power exporting standards. The market of 450 million Europeans is gradually becoming a lever of regulatory influence: whatever is made compliant in order to gain access to the European market soon becomes compliant everywhere (the so-called “Brussels effect”). This strategy is a claim to a new form of power: a hegemony of limits. While others invest in expanding their sovereignty, Europe invests in the perimeter of legality. It is a high-risk political choice grounded in a major ethical ambition: to declare that what can be done should not automatically be done; only what is just.
Even Machiavelli, if he were alive today, might see in this stance a modern version of virtù: not the harsh imposition of will over others, but the capacity for self-restraint before power grows beyond the possibility of self-control. In his era, leadership was judged by the ability to tame rivers and armies; in ours, it is also judged by whether one can govern algorithms: systems that “compute” the world faster than humans can. The fortuna of the 21st century is the instability of networks, the “black holes” of data, the feedback loops that can escalate a single error into a crisis. A ruler who does not understand and control their tools will inevitably be ruled by them.
Europe’s project is far from guaranteed to succeed. The pressure for (economic) efficiency may push against the boundaries of ethical conduct and demand concessions. Yet this is precisely where the future of democracy will be determined: in the resolve to stand firm on the source of its legitimacy, even when it appears outdated compared with the technological promise of omnipotence.
If AI is the new currency of power, Europe is trying to mint its other side: a system in which authority remains accountable and humans are not assessed as statistical probabilities, but recognized as bearers of inalienable rights. The EU is not claiming the title of master of the technological age, but something perhaps more difficult: to define the conditions under which this new age will not become the erasure of human freedom. Should the Union rise to the height of its institutions, it may prove that the most resilient form of power is the one that imposes limits even on itself. Otherwise, algorithms will define the world without us. And by then, it will be too late to recover the boundaries we allowed to fade.
Ioannis Sidiropoulos, LL.M (LSE, UvA), lawyer (Athens, Cyprus)
Fellow, Diplomatic Academy, University of Nicosia
Senior Fellow, Strategy International think tank, Nicosia