Greece is at a rare constitutional crossroads. Prime Minister Kyriakos Mitsotakis has announced a sweeping revision of the Constitution, proposing changes to dozens of articles and calling for a broad national debate. Among the concepts now capturing public attention is artificial intelligence. Yet despite frequent references to “modernising the Constitution to include AI,” one critical question remains unasked: what does this actually mean?
At present, the phrase risks being a placeholder—a nod to progress rather than a legal commitment. Constitutions are not technology roadmaps. They are instruments designed to structure power, constrain authority, and protect citizens. If AI is to enter this constitutional conversation, it must do so in those terms. Otherwise, the revision risks being a symbolic gesture with little impact on the rights of citizens or the limits of state power.
AI as a New Locus of Power
AI is often framed as a neutral efficiency booster: a way to deliver public services faster, optimise decision-making, or modernise governance. From a constitutional perspective, however, AI is far more than a tool. It is a decision-making infrastructure, mediating access to rights, public goods, and state authority. When algorithmic systems shape welfare eligibility, tax enforcement, policing, migration, or judicial risk assessments, they exercise forms of power that were once human and accountable.
The danger is not automation itself, but the erosion of accountability when decisions are delegated to systems that are opaque, complex, and distributed across public and private actors. Citizens can easily become subjects of decisions they cannot understand, contest, or appeal. Greek constitutional law guarantees due process, equality before the law, and human dignity, yet automated decision-making challenges these protections at their core.
A constitutional revision must clarify that automation cannot bypass accountability. Decisions affecting rights must remain explainable, reviewable, and attributable to human authorities. Without such safeguards, even a modernised Constitution risks remaining a text with little practical effect.
Equality before the law also faces new challenges. AI systems learn from historical data—and history is rarely neutral. Left unchecked, algorithms can reproduce or amplify discrimination based on gender, ethnicity, disability, geography, or socio-economic status. A responsible constitutional approach would enshrine positive obligations for bias assessment, transparent auditing, and effective remedies. Anything less risks modernising language without modernising justice.
Human dignity, too, is under pressure. Algorithms that continuously score, predict, and nudge citizens risk treating individuals as data points rather than rights-bearing subjects. Constitutional recognition of AI must reaffirm that human judgement remains decisive in life-altering decisions. Responsible AI use is not about banning technology, but ensuring that technology serves citizens, not the state or market.
Finally, there is the democratic dimension, which is perhaps the most overlooked but most consequential aspect of AI in governance. Many AI systems used by the state are developed, deployed, or maintained by private actors, often operating outside Greece or even outside the European Union. This raises profound questions of sovereignty, oversight, and accountability. When core state functions—allocation of public benefits, law enforcement decisions, or predictive analytics used in policymaking—are mediated through privately controlled code, the locus of power shifts away from elected representatives and public institutions toward opaque corporate actors.
Constitutional recognition of AI should therefore anchor these systems firmly within democratic oversight. This means ensuring that any AI-influenced decision in public administration is not only transparent, but also reviewable by Parliament, subject to independent oversight bodies, and open to public accountability. It also requires robust legal mechanisms to hold private developers and vendors responsible when their systems produce discriminatory, harmful, or unlawful outcomes. Without such measures, even well-intentioned reforms risk entrenching a form of technocratic governance insulated from the people, where key decisions are delegated to algorithms designed and operated by entities beyond the reach of Greek law. This would not only undermine citizen trust in the state, but also contradict the original purpose of constitutional safeguards, which were conceived precisely to prevent the abuse of power—whether wielded by politicians, bureaucrats, or, now, code.
Constitutionalising AI Is About Rights, Not Technology
The current debate carries real political urgency. The Prime Minister frames the revision as “generous and bold,” aimed at modernising the state. But if AI is to be part of this conversation, it must be approached with precision and seriousness. Symbolic or vague references—simply acknowledging AI as a factor of modernity—will satisfy neither technologists nor human rights advocates.
Constitutionalising AI is not about listing technical standards or predicting which algorithms will dominate. It is about translating core constitutional principles into a world mediated by code. The principles that limited state power in 1975, in the aftermath of the military dictatorship, must now constrain power exercised through automated systems.
In practice, this would mean enshrining enforceable obligations:
- Due process and contestability: Citizens affected by AI-mediated decisions must have meaningful paths to challenge and review outcomes.
- Accountability and responsibility: Humans must remain legally and politically accountable for automated decisions.
- Transparency and oversight: Systems used in governance must be auditable and subject to independent scrutiny.
- Equality and non-discrimination: The state must proactively prevent algorithmic bias.
- Human dignity: Automation must not override human judgement in life-altering decisions.
Without these principles, constitutional mentions of AI risk being hollow symbolism. Greece has an opportunity to lead in Europe, demonstrating how law can discipline technological power while still enabling innovation.
The revision debate is therefore not only a test of political skill but of constitutional seriousness. If AI enters the Greek Constitution, it must do so with clear rights, obligations, and enforceable safeguards. Otherwise, it risks entrenching technocratic control, obscuring accountability, and weakening the protections that constitutional law was designed to guarantee.
Greece’s Constitution was born from the desire to prevent the abuse of unchecked power. Today, power increasingly flows through code rather than decrees. If AI is to be part of constitutional reform, it must be treated not as a technical inevitability, but as a new arena where constitutional principles are tested, reaffirmed, and enforced. Until that clarity is achieved, the central question is not how AI will be mentioned in the Constitution, but whether the Constitution is ready to discipline power exercised through machines.
That is the debate Greece must have—and the one on which the legitimacy of the entire constitutional revision may ultimately hinge.
Dr. Konstantinos Komaitis is a resident senior fellow with the Democracy and Technology Initiative at the Atlantic Council, leading the work on digital governance and democracy.