Are Cypriot firms automating authority faster than they are governing it? The AI risk discussion in Cyprus often focuses on the wrong problem.
By Petros Nearchou
Most of the conversation centres on autonomy: software making unpredictable or wrong decisions, systems going rogue, as if AI is likely to break free and cause dystopian chaos. But the actual risk is the exact opposite.
AI systems are not unpredictable. They are, in fact, extremely obedient: they execute existing permission models exactly as configured, including the flawed ones, with a speed and consistency no human can match. Artificial intelligence does not, on its own, weaken governance. It executes whatever authority structure already exists. When connected to shared accounts or long-standing service credentials, AI instantly inherits years of accumulated access, creating substantial risk for the business.
The problem is not AI adoption in Cyprus. The problem is authority that was never deliberately defined before AI arrived. Permissions designed for human use acquire a different risk profile at machine speed. When AI connects to an existing identity, it executes that identity’s full permission set continuously and at scale.
Consider a law firm based in Limassol that used Google’s Gemini to accelerate document review. Within 48 hours, the AI was summarising contracts and extracting clauses from prior agreements. It also searched seven years of client files, merger negotiations and internal strategy notes, because it operated under the managing partner’s account. No one approved that scope for AI; it was already embedded in the account’s permissions. A similar pattern arose at a financial services firm in Nicosia that connected OpenAI’s ChatGPT to a shared OneDrive environment used for “team collaboration.” Over time, that account had accumulated access to KYC files and transaction reports.
Ten employees used that login. After integration, an eleventh user joined.
The eleventh user was not human. It was software.
Nothing malfunctioned. The tools behaved exactly as configured and authorised. They executed the authority already present in the connected accounts, authority the firm never deliberately defined. This is authority inheritance at scale. Authority inherited is authority multiplied: when AI systems are introduced, unchecked access compounds rather than simply persisting as before.
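The inheritance pattern above can be sketched in a few lines. This is an illustrative model, not any vendor’s API; the account names and permission labels are hypothetical, loosely echoing the two examples.

```python
# Hypothetical sketch of authority inheritance: an AI integration
# bound to an existing account receives that account's entire
# accumulated permission set, not a scope chosen for the AI.

ACCOUNT_PERMISSIONS = {
    "managing_partner": {
        "current_contracts", "archived_client_files",
        "merger_negotiations", "internal_strategy_notes",
    },
    "shared_team_drive": {
        "collaboration_docs", "kyc_files", "transaction_reports",
    },
}

def effective_ai_scope(connected_identity: str) -> set[str]:
    """The AI's reach is whatever the identity can already reach."""
    return ACCOUNT_PERMISSIONS[connected_identity]

# The firm intended the AI to summarise contracts; it inherited everything.
ai_scope = effective_ai_scope("managing_partner")
assert "internal_strategy_notes" in ai_scope  # never explicitly approved for AI
```

Nothing in this sketch is a bug. The risk sits entirely in the permission data the integration was handed.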
Cyprus’s economic structure amplifies this dynamic. Investment firms onboard clients remotely. Payment institutions process cross-border transactions through cloud platforms. Outsourced IT providers configure core systems. Shared accounts and broad service credentials are common shortcuts in a small, fast-moving market that prioritises speed and trust, resulting in persistent structural exposure.
Imagine a financial services firm deploying an AI assistant to respond to client portfolio questions. To simplify integration, the assistant operates under a senior relationship manager’s account, which provides legitimate access to full portfolio histories and KYC documentation. When a human used that access, professional judgment acted as a control layer. The manager understood which data was relevant to a client enquiry, which information was confidential across accounts, and which internal deliberations should never be surfaced externally.
The AI system, however, operates differently. If prompted cleverly, even unintentionally, it may retrieve and integrate information across portfolios, or, for example, summarise material prepared for regulatory submissions. It does not “know” that some technically accessible information is contextually inappropriate for certain situations. It simply executes whatever falls within its configured scope.
Again, nothing malfunctions. The system behaves exactly as configured. However, the human decision-making buffer has now disappeared.
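One way to re-introduce the missing buffer is to make the human judgment explicit: a deny-by-default policy check that sits between the assistant and the data store, filtering what is retrievable by the purpose of the request. The categories and function names below are illustrative assumptions, a minimal sketch rather than a complete policy engine.

```python
# Hedged sketch: replacing implicit professional judgment with an
# explicit, deny-by-default retrieval policy. Categories are hypothetical.

ALLOWED_BY_PURPOSE = {
    "client_enquiry": {"own_portfolio_history", "own_statements"},
    "internal_reporting": {"own_portfolio_history", "transaction_reports"},
}

def retrievable(doc_category: str, purpose: str) -> bool:
    """Deny by default: only categories approved for the stated purpose pass."""
    return doc_category in ALLOWED_BY_PURPOSE.get(purpose, set())

# A client enquiry can reach that client's history...
assert retrievable("own_portfolio_history", "client_enquiry")
# ...but not regulatory submission drafts or other clients' data,
# even if the underlying account could technically access them.
assert not retrievable("regulatory_submission_drafts", "client_enquiry")
```

The point is not the ten lines of code; it is that the control now exists as a documented, auditable artefact rather than as an assumption about how a relationship manager would behave.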
Governance frameworks are beginning to reflect this shift. They now explicitly treat AI as an operational risk domain requiring traceable decision-making, documented access controls and continuous monitoring. The NIS2 Directive places boards under explicit accountability for cybersecurity oversight. The Digital Operational Resilience Act (DORA) requires financial institutions to manage third party ICT risk and demonstrate operational control.
Both frameworks implicitly test the same capability: can a firm prove controlled machine authority?
For Cyprus, this creates a competitive divide. Not between firms that use AI and those that avoid it, but between firms that can demonstrate controlled machine authority and those that cannot. The former move through due diligence faster and respond to audits with confidence whilst maintaining trust in cross-border engagements. The latter discover the true scope of automated access only when insurers, auditors or clients demand precise explanations, or when something goes wrong; “the AI did it” is not a defence.
Cyprus positions itself as a credible regional hub for financial and technology services. Established international firms increasingly examine access governance models when AI systems are involved. Efficiency signals progress. Unmapped authority signals fragility.
Cyprus firms now face a simple but consequential decision:
Redesign access boundaries so that every AI system operates under explicitly defined, limited and revocable authority. Or continue enabling automation on top of inherited, undocumented permissions and attempt to justify that setup later under regulatory and commercial scrutiny.
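The first option amounts to giving the AI its own identity. As a minimal sketch, under the assumption of a hypothetical in-house identity model (the class and scope names are invented for illustration), the defining properties are an explicit scope list and a revocation switch:

```python
# Illustrative sketch of a dedicated AI service identity with
# explicitly defined, revocable authority, instead of an inherited login.

from dataclasses import dataclass, field

@dataclass
class AIServiceIdentity:
    name: str
    allowed_scopes: set = field(default_factory=set)  # granted deliberately
    revoked: bool = False

    def can_access(self, scope: str) -> bool:
        return not self.revoked and scope in self.allowed_scopes

    def revoke(self) -> None:
        """One switch withdraws all authority at once."""
        self.revoked = True

assistant = AIServiceIdentity("contract-review-bot",
                              allowed_scopes={"active_contracts"})
assert assistant.can_access("active_contracts")
assert not assistant.can_access("archived_client_files")  # never granted
assistant.revoke()
assert not assistant.can_access("active_contracts")  # authority withdrawn
```

Contrast this with the inherited-account pattern: here every scope the system holds was granted on purpose, is visible to an auditor, and can be withdrawn in one step.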
The practical shift is simple. Stop asking only what AI can do and start defining precisely who AI is allowed to be within your organisation’s authority model.
The key question is no longer whether AI will act autonomously. It is whether you have specified the authority it is already executing.
*Petros Nearchou is a director at a US-based Enterprise Cybersecurity & IAM firm.