By Alexandra Attalides
“Technology is a useful servant but a dangerous master,” wrote Nobel laureate Christian Lous Lange a century ago. In the age of artificial intelligence (AI), this observation has transformed from philosophical warning into a concrete political dilemma.
As Cyprus enters an institutional debate on Democracy and the Rule of Law in the era of AI, the central question becomes unavoidable: if algorithms shape public opinion beyond democratic accountability, how free is choice – and who really holds power?
AI’s accelerating evolution is creating a strategic tension that democratic systems worldwide struggle to manage. From leading AI developers such as Sam Altman and Elon Musk to long-time critics, from Oxford and MIT researchers to international regulatory bodies, a consensus is converging: the transition toward artificial general intelligence (AGI) – or potentially artificial superintelligence (ASI) – is outpacing the regulatory and institutional capacities of states. This mismatch produces not only technical risks but democratic ones: deficits in oversight, opacity in automated decision-making, and new forms of political influence that operate beyond the reach of traditional accountability mechanisms.
AI and Elections
A further concern relates to the scale and visibility of AI-driven influence operations, both globally and within Cyprus itself.
Internationally, there is now a well-documented industry offering coordinated networks of automated or semi-automated social media accounts, algorithmic amplification services, and synthetic political content, designed to shape public opinion on behalf of paying clients.
The critical question for Cyprus is not whether such capabilities exist globally – they clearly do – but whether state authorities have a clear picture of the extent to which these practices may already be operating within the local digital public sphere, particularly in view of forthcoming elections.
Do our institutions possess the technical capacity, data access and regulatory tools necessary to detect, analyse and respond to such forms of algorithmic manipulation?
International precedents illustrate how rapidly democratic vulnerabilities can escalate. In Slovakia (2023 parliamentary elections), an AI-generated audio deepfake falsely portraying a party leader discussing electoral fraud circulated online in the final days before voting, prompting warnings from authorities about its potential impact at a decisive moment.
In the United States (2024 Democratic primaries), voters in New Hampshire received a fabricated robocall using an AI-generated voice mimicking President Joe Biden, urging them not to participate in the primary election – an incident that triggered federal and state-level investigations.
In India (2024 general elections), AI-generated videos of political candidates speaking in languages they do not know were widely disseminated online, raising concerns about informed consent, manipulation and the blurring of authenticity in political communication.
Across Latin America, particularly in Brazil and Mexico (2022–2024 electoral cycles), researchers and electoral authorities documented large-scale coordinated bot activity amplifying polarising narratives and disinformation, further eroding trust in democratic institutions.
These examples reveal the most profound democratic challenge: not merely the spread of false information, but the erosion of the public’s ability to distinguish truth from fabrication.
Europe’s regulatory response
The European Union has responded with a strengthened regulatory framework for artificial intelligence and digital political influence. The EU Artificial Intelligence Act, which entered into force in August 2024, introduces risk-based rules, bans certain high-risk AI practices, and establishes transparency and accountability obligations for advanced AI systems. In parallel, the Digital Services Act reinforces platform duties to address disinformation and systemic risks, while new EU rules on the transparency and targeting of political advertising, adopted in March 2024, restrict the use of personal data for political targeting. Together, these measures aim to safeguard democratic processes in the digital age.
Yet regulation, while necessary, is not sufficient. Rules can only operate effectively if states possess the institutional capacity to implement them and the political will to enforce them.
The central political question is as simple as it is consequential: can a democracy remain functional if public opinion is shaped by systems controlled by those who possess capital, computational power and privileged technological access?
If the answer is anything short of an unequivocal “yes”, then democratic equality risks giving way to a new form of digital oligarchy – one in which influence is not earned through ideas but purchased through algorithms.
AI holds extraordinary potential. To harness its promise while containing its dangers, Cyprus – like all democratic states – needs a national strategy, institutional maturity, and political courage.
Because democracy cannot operate in the darkness of opaque algorithms. AI must remain a tool that empowers citizens – never a weapon that controls them.
Alexandra Attalides is a Volt MP