
Digital Omnibus Slashes Core European Digital Rights

On November 19, 2025, the European Commission released the Digital Omnibus Regulation. While the Commission claims its goals are to reduce regulatory bureaucracy and simplify compliance, the proposal extends far beyond this scope. Under its provisions, Big Tech is granted far more leeway in exploiting Europeans’ personal data, while the AI Act, which is still in the process of being implemented, is further delayed.

Protestors outside the Commission (source: Dave Keating's Substack)

Competitiveness: a valid concern?  

These developments sit against the backdrop of rising geopolitical tensions. China and the US stand as global leaders in AI development, with the EU lagging behind. In the Draghi Report, former European Central Bank President Mario Draghi outlines the roadblocks to the EU’s competitiveness. He highlights burdensome regulation as a barrier to attracting foreign businesses and innovation, calling for simplification to boost the EU’s competitiveness on the world stage.

Concerns about competitiveness have coincided with the US’s ongoing political and economic pressure on the EU, especially in the technology sector. President Trump’s aggressive tariff wars waged against the EU, in combination with sustained industry lobbying, have instilled enough fear in the EU to compel it to make major cuts to hard-won regulations protecting digital rights, such as the GDPR. Consequently, the EU has shifted from its role as a regulatory power to one scrambling for “simplification” without due process.

To illustrate the scale of the shift, consider that while the GDPR took years of negotiation, the Omnibus is reopening it after only five weeks of public consultation and calls for evidence. A joint letter by NOYB, the Irish Council for Civil Liberties, and EDRi claims that, in addition, “Omnibus procedures compress parliamentary timelines and restrict scrutiny, handing disproportionate power to the Commission. The result is a package that risks bypassing democratic oversight and undermining confidence in the EU as an evidence-based regulator.”

Other amendments make it equally clear that the Omnibus mostly benefits large corporations, while offering few meaningful gains for small and medium-sized enterprises, despite that being one of the Omnibus’ core stated goals. Some of the most contested changes concern the definition of personal data protected under the GDPR and the treatment of personal data in training AI models.

“…it is like a gun law that only applies to guns when the owner confirms he is able to handle a gun, and intends to shoot someone. It is obvious how absurd such subjective definitions are.”

(Max Schrems)

To protect or not to protect… that is the question 

The proposed changes to the GDPR narrow the scope of personal data, shifting autonomy and control away from people and towards companies. Currently, the GDPR uses a broad, objective definition of personal data: information that can identify an individual directly or indirectly is protected. Individuals control how this information is used by companies operating in the EU.

However, the proposed changes opt for a subjective definition of personal data. In this case, protection under the GDPR would depend on whether a company can “reasonably” identify an individual. If it claims it cannot, or does not intend to, the data in question would not be protected—opening the door to exploitation and abuse, according to the Civil Liberties Union for Europe (Liberties).

NOYB founder Max Schrems writes that “it is like a gun law that only applies to guns when the owner confirms he is able to handle a gun, and intends to shoot someone. It is obvious how absurd such subjective definitions are.”

AI Act: industry over people

A concerning consequence of changes to the GDPR (to put it rather lightly) also affects how large language models and other AI applications will be allowed to process personal data. If an AI company deems it has a “legitimate interest” in processing personal data—including activity on social media, private documents, or chat history—to train its models, it will be allowed to do so without permission, Liberties adds.

To be clear: this will primarily benefit US tech giants (e.g., OpenAI, Meta, and Google) that have tirelessly argued that burdensome regulation limits their operation in the EU. The US, unlike the EU, has no federal digital privacy law preventing companies from exploiting citizens’ personal data (instead, the US has a patchwork of state-level policies). Loosening EU standards would give US companies more leeway in using European data with fewer restrictions and reduced transparency.

Another proposed change to the AI Act, this one concerning transparency, makes it equally clear that the Omnibus mostly benefits large corporations.

The AI Act is a risk-based approach to ensuring online safety. It categorizes four different levels of risk for AI systems: unacceptable risk, which covers banned systems that pose clear threats; high risk, which includes systems that could cause serious harm and must meet strict requirements before being deployed; limited risk, which are subject to transparency obligations; and minimal risk, which face few or no restrictions.

Though it marks a step in the right direction, the AI Act has been criticized for catering to corporate interests. For example, an existing exemption in Article 6(3) allows providers of high-risk systems to opt out of all legal obligations on the condition that they have completed a risk assessment, the criteria for which are quite broad. Daniel Leufer, Senior Policy Analyst at Access Now, writes that the only safeguard against this was for providers to publicly declare that they were exempting themselves. Now, the Omnibus would allow high-risk providers to opt out of legal obligations without publicly disclosing that they have done so.

To illustrate the gravity of this change: high-risk AI systems are used in areas like safety equipment, critical infrastructure, education and training, employment and worker management, essential public and private services, law enforcement, migration and border control, and even legal decision-making. These are systems that directly shape people’s rights, opportunities, and access to basic services. 

Giving corporations unfettered access to people’s personal data will only further entrench power in Big Tech while worsening inequality and violence against marginalized communities. Reduced transparency means that people will remain in the dark about when their data has been used, leaving them with no recourse for being unfairly targeted by AI systems in instances like employment screening or racial profiling by facial recognition technologies. Meanwhile, with more data available to train their models, AI companies will be able to build what is essentially an infrastructure of personal identities. At best, this will allow them to deploy highly targeted ad campaigns; at worst, it will create vast datasets that could be used by foreign adversaries to manipulate public perceptions and undermine democratic processes.

Photo by SHVETS production (source: Pexels)

Civil Society: Digital Omnibus won’t help EU businesses

Though the Commission claims the Digital Omnibus will reduce the burden of regulatory compliance to boost innovation and competitiveness, it is likely to have the opposite effect.

In their joint letter, NOYB, the ICCL, and EDRi write that “reducing the scope of fundamental rights of people in the EU will not strengthen European competitiveness for organisations who play by the rules. In an already concentrated market, deregulation will further erode European sovereignty and increase dependence on non-EU companies.” 

Therefore, “even if the absolute performance of European companies improves, the gap with the US in AI may widen,” writes Bruegel Non-resident Fellow, Mario Mariniello.

“Companies see uncertainty, lack of skills, problems accessing finance and national fragmentation as more significant barriers to investment. Simply reducing company reporting obligations will not give the EU a fresh start.”

(Mario Mariniello)

Mariniello adds that, even in the instances where the Omnibus does seem to help small and medium-sized companies, the EU’s relative lag in AI has more to do with deep structural factors than with burdensome regulation. “Companies see uncertainty, lack of skills, problems accessing finance and national fragmentation as more significant barriers to investment. Simply reducing company reporting obligations will not give the EU a fresh start,” he writes.

Another joint statement by 127 civil society organisations states that “If the EU truly aims to ease compliance with these laws, they should better support companies and authorities with the guidance and tools to keep people safe in the digital world—not dismantle the frameworks that provide legal clarity for businesses.”

What’s next

Starting in late November 2025, the European Commission will send the Digital Omnibus proposal to the European Parliament. In the new year, Members of the European Parliament will debate and propose changes, while EU governments in the Council work in parallel to agree on their own position. Once both sides have drafted their reports, the Commission, Parliament, and Council will enter “trilogue” negotiations to reach a final compromise.

If the Omnibus passes in its current form, it would clearly signal to the rest of the world that the EU is willing to forgo fundamental digital rights for the sake of doing better business with American tech giants. Setting such a precedent is especially dangerous at a time when institutional trust remains low. While it’s true that the EU must simplify overly complicated legislation for the benefit of its own companies, this must not derail into deregulation if the EU wants to uphold its legacy as a regulatory power and defender of human rights.

This article was written by Natalie Jenkins as part of the Local Journalism Initiative.  
