By Avgi Michael and Anastasios Kostekoglou
The Digital Omnibus arrives at a critical juncture for Europe, amid growing debate as to whether the cumulative effect of its regulatory framework has constrained competitiveness, particularly considering differing approaches adopted abroad.
The case for simplifying digital laws
The European Commission has expressly acknowledged the need for a more simplified digital regulatory framework. As underscored in the Draghi Report, regulatory complexity in the digital sphere is increasingly viewed politically as a structural impediment to competitiveness, innovation and sustainable growth within the European Union (EU).
In response, on November 19, 2025, the European Commission, as part of its Digital Package, published the Digital Omnibus proposal, comprising two interrelated regulations, one designed to amend various digital laws, including, inter alia, the EU General Data Protection Regulation (GDPR) and the Data Act, and the other aimed at amending the EU Artificial Intelligence Act (AI Act).
While the proposed Digital Omnibus seeks to support individuals and businesses through a regulatory framework that facilitates cost-efficient and innovation-friendly compliance, its objective is not to dilute the level of protection afforded by the existing EU digital and cyber law acquis.
Rather, the initiative is grounded in a “simplicity by design” approach to legislation, aimed at reducing unnecessary complexity while preserving the EU’s high regulatory standards, policy objectives and fundamental rights safeguards.
GDPR meets AI Act
The Digital Omnibus proposals signal a strategic refinement of the GDPR in the context of artificial intelligence (AI), aimed at removing unnecessary barriers to data processing that could hinder AI development.
Recognising the central role of data in driving value in the digital economy, and in line with the objectives of the EU’s Data Strategy, these amendments seek to establish a coherent framework for data availability and use, while maintaining the highest standards of privacy and personal data protection.
The emphasis on these two instruments is deliberate. The GDPR constitutes the EU’s foundational data protection framework and a recognised global benchmark for privacy law, while the AI Act, adopted in 2024, is fully binding and directly applicable across all Member States, with its provisions phased in from 2025 to 2027, reflecting the EU’s ambition to set a global standard in the comprehensive regulation of AI.
However, the AI Act’s strict requirements, significant extraterritorial effects and enforcement framework are expected to have a profound impact on the further development and use of AI, raising concerns that the regulatory burden may stifle innovation.
A deeper concern is that the AI Act itself risks undermining the fundamental principles and rights enshrined in the GDPR, potentially creating inconsistencies between the two frameworks and opening a regulatory gap within the EU’s legal architecture.
Rethinking the GDPR
The Digital Omnibus clarifies and expands the circumstances in which personal data, including special categories of data, may be lawfully processed, particularly in the context of AI system development and operation, biometric data processing and AI de-biasing activities.
The current legal framework shows why these changes matter. Article 9(1) of the GDPR establishes a general prohibition on processing special categories of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data for unique identification, health data or data concerning sex life or sexual orientation.
In its current form, Article 9(2) allows only ten specific exceptions to this general prohibition. This creates a potential impediment to AI development, as companies training AI systems on large, multi-source datasets may, notwithstanding filtering efforts, inadvertently include special categories of personal data.
Under current rules, accidentally processing sensitive data without a valid legal exception is a violation that may lead to administrative fines of up to €20 million or, in the case of an undertaking, up to 4 per cent of its total worldwide annual turnover in the preceding financial year, whichever is higher.
Moreover, while the AI Act acknowledges that managing bias is a critical component of AI model training, given that biases may adversely affect individuals’ health and safety, infringe fundamental rights or result in discriminatory outcomes, Article 10(5) permits the processing of special categories of personal data strictly for bias monitoring, detection and correction, and only in relation to high-risk AI systems.
This creates a regulatory gap: AI systems outside the high-risk classification are effectively barred from using sensitive data to detect and remediate bias, even though they too may produce discriminatory outcomes.
The Digital Omnibus represents a measured departure from this approach. Under the proposed Article 9(2), the processing of special categories of personal data in the context of the development and operation of AI systems or models is permitted, subject to the safeguards introduced by draft Article 9(5).
These safeguards require the implementation of appropriate organisational and technical measures to prevent the collection or other processing of special categories of personal data. Where, despite such measures, special categories of personal data remain in datasets used for training, testing or validation, or within the AI system or model itself, the controller is obliged to remove them.
In circumstances where removal would impose a disproportionate burden, the controller must ensure that the data is effectively protected from use in producing outputs, from public disclosure, or from any other form of access by third parties.
Therefore, the proposed Article 9(5) clarifies that, as a general rule, special category data should not be used for AI development or operation, with the exception effectively covering residual data that persists despite the controller’s diligent preventive efforts, where removal would entail a disproportionate effort.
With regard to biometric data, including facial images or dactyloscopic data, the proposed Article 9(2)(l) permits processing where it is necessary for the purpose of verifying the data subject’s identity.
Crucially, this exemption applies only where the biometric data, or other means required for verification, is under the sole control of the data subject. In practice, this means that the data must remain on the data subject’s device or be stored in an encrypted form accessible exclusively by the individual.
The proposals also seek to provide greater legal certainty by introducing a new provision governing the processing of personal data in the context of AI development and operation. Under the proposed Article 88c, controllers may rely on the legitimate interest ground under Article 6(1)(f) of the GDPR where processing is necessary for the controller’s interests in developing and operating an AI system or model.
Such processing is permitted except where other laws explicitly require consent, or where the controller’s legitimate interests are overridden by the interests or fundamental rights and freedoms of the data subject.
Controllers must therefore conduct the balancing test mandated by Article 6(1)(f) and implement appropriate organisational and technical measures to safeguard data subject rights. These measures, as envisaged by the proposed Article 88c, include minimising the personal data used for AI training and ensuring that data subjects can exercise an unconditional right to object.
The Digital Omnibus in perspective
The Digital Omnibus marks a significant evolution in the EU’s approach to personal data in the context of AI development and operation. By introducing targeted measures, it seeks to support technological innovation while simultaneously enhancing individual protections, facilitating more effective de-biasing of AI systems and mitigating the risks of discrimination or other harmful outputs.
The proposals represent a deliberate effort to reconcile the protection of fundamental rights with the EU objective of remaining economically and technologically competitive.
However, challenges and ambiguities remain. The “disproportionate effort” standard for residual sensitive data is inherently imprecise, creating the potential for divergent interpretations by controllers, and a risk that companies may construe it broadly to avoid costly remediation.
Likewise, the requirement to effectively protect sensitive data from use in generating AI outputs is questionable given the opacity of modern machine learning systems and the difficulty of attributing specific outputs to particular inputs.
The Digital Omnibus does not amount to a wholesale deregulation of data protection, nor is it a mere technical tweak. Rather, it constitutes a measured attempt to modernise the regulatory framework for the AI era, recognising that rigid, all-or-nothing prohibitions may be unworkable while retaining core safeguards for data subjects.
In this sense, the initiative reflects a broader EU strategy to recalibrate the balance between protecting individual rights and fostering innovation, ensuring that the EU continues to set global standards in data protection while remaining competitive in the technological landscape.
Avgi Michael is an associate and Anastasios Kostekoglou a trainee lawyer at Elias Neocleous & Co LLC