An Akel MP has tabled legislation to tackle the phenomenon of ‘deepfakes’ and AI manipulation of audio-visual material.

The bill from Akel’s Christos Christofides would make it a criminal offence to circulate manipulated audio, images or video of a person without their express consent.

In addition, affected people would be able to sue for damages and seek injunctive relief.

Injunctive relief is a court-ordered remedy that compels a party to do or refrain from doing a specific action.

Under the proposal, the prohibition would extend to 50 years after a person’s death.

Christofides points out that the depiction of actual persons without their consent poses risks of misinformation, fraud, impersonation and deception.

The uncontrolled use of such technologies and products could cause “serious violations of the fundamental rights of citizens and be harmful to the public interest, democracy and the rule of law generally”.

The MP also proposes enshrining into law that an individual has an inherent right to retain control over audio-visual material pertaining to their person.

According to Phileleftheros, the impetus for this initiative came from a faked photograph depicting Akel MP Giorgos Loukaides with former auditor-general Odysseas Michaelides.

The manipulated photograph, which circulated months ago, showed the two hugging.

In reality, the two men had met in a cafeteria, where someone took a picture of them. They never hugged.

But the fake photo apparently caused a stir among the Akel fanbase, some of whom believed it to be genuine.

During a visit to a community, an Akel cadre was approached by locals who sought explanations for the apparent display of amity between the Akel MP and Michaelides.

The Akel cadre had to explain the photo was fraudulent.

In early November, the European Commission launched work on a code of practice on the marking and labelling of AI-generated content.

Under the AI Act, content such as deepfakes and certain AI-generated text and other synthetic material must be clearly marked as such. This requirement reflects the growing difficulty in distinguishing AI-generated content from authentic, human-produced material.