Information manipulation should be approached as part of a broader hybrid-threat environment. It interacts with cyber operations, political coercion, economic pressure, and the exploitation of social fractures. European institutional actors increasingly treat information manipulation operations as closely linked to cybersecurity and hybrid threats, and as tools used by both state and non-state actors for political and strategic gain.

That framing changes what “effective response” means. In a hybrid-threat context, the key question is not only whether harmful narratives exist. It is under what conditions they gain traction and scale, and how those conditions can be disrupted. Hybrid pressure tends to be most effective where democratic legitimacy is contested, trust in institutions and media is weak, and polarisation is high. So resilience isn’t just detecting, exposing and debunking. It’s changing the conditions that enable scale and disrupting the infrastructure that keeps information manipulation activities operational.

From this perspective, defunding information manipulation and harmful content belongs inside a broader resilience and deterrence agenda. If hostile actors can cheaply build and sustain information manipulation networks by exploiting an online advertising and platform economy that rewards sensationalism, impersonation, and outrage, then the problem also lies in the market design that makes those networks economically viable and operationally scalable.

Why monetisation matters in a hybrid-threat context

Information manipulation operations rely on an operational environment: websites, pages, ad accounts, amplification networks, and, crucially, the ability to replace assets when individual nodes are removed. Monetisation matters because it helps sustain that operational environment. Even where campaigns are not primarily profit-driven, monetisation can function as an enabling layer. It lowers operating costs, extends reach, and supports persistence across crises, elections, and policy debates.

The stronger concern is that commercial digital systems can provide paid reach, visibility, and resilience to coordinated manipulation efforts. Research has documented how weak enforcement of political advertising rules enabled undeclared political ads and pro-Russian propaganda ads to circulate at scale in the EU ahead of the 2024 European elections, reaching tens of millions of accounts. Separate investigations reported that Meta earned US$338,000 between August 2023 and November 2024 by hosting at least 8,000 pieces of sponsored content linked to the Russia-linked “Doppelganger” operation. The EEAS has similarly documented the use of inauthentic accounts and paid amplification tactics as part of broader information operation activity targeting European audiences.

The point here is operational. Hostile actors can exploit advertising infrastructure and recommendation systems to distribute harmful content more efficiently, especially at politically sensitive moments (see our latest Germany Report).

The monetisation chain is vulnerable

A second layer of the problem sits in the broader advertising market. Academic research shows that digital advertising systems routinely place major brands’ ads on low-integrity information sites, helping finance them at scale. Other research shows how opaque ad-tech practices, such as ad inventory pooling, allow low-quality publishers to hide inside legitimate market infrastructure and circumvent brand-safety protections.

This is where “defunding disinformation” becomes a concrete resilience lever: it targets the monetisation chain that sustains manipulation ecosystems, not just individual posts, channels or domains.

Structural vulnerability: opacity in the advertising supply chain

Research on the political economy of online disinformation shows that actors routinely exploit opacity in three repeatable ways: they pool advertising inventory with legitimate publishers; they obscure ownership through intermediary networks; they use domain switching and mirror sites to evade brand-safety enforcement. The result is structural resilience.
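
One hedged sketch of how analysts can work against the mirror-site pattern, assuming page-level “fingerprints” (analytics tags, ad seller account IDs) have already been collected from crawled pages: any fingerprint shared across domains links them into a candidate cluster. All identifiers and domains below are invented for illustration; this is not GDI's production tooling.

```python
from collections import defaultdict

# Invented example data: infrastructure "fingerprints" observed on each
# domain (e.g. analytics tags, ad seller account IDs). Real pipelines
# would extract these from crawled pages and ads.txt files.
domain_fingerprints = {
    "news-mirror-a.example": {"UA-111111", "seller-42"},
    "news-mirror-b.example": {"UA-111111", "seller-42"},   # same operator, new domain
    "unrelated-site.example": {"UA-999999", "seller-7"},
}

# Invert the mapping: which domains share each fingerprint?
by_fingerprint = defaultdict(set)
for domain, fps in domain_fingerprints.items():
    for fp in fps:
        by_fingerprint[fp].add(domain)

# Any fingerprint seen on two or more domains links them into one candidate
# cluster, surviving the domain switches that defeat URL-based blocklists.
for fp, domains in by_fingerprint.items():
    if len(domains) > 1:
        print(f"{fp}: {sorted(domains)}")
```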

Even when specific channels are exposed or removed, the broader financial infrastructure sustaining them often remains intact. Individual nodes can be replaced while the monetisation pipeline continues functioning. For policymakers concerned with hybrid threats, this matters because it shows that the sustainability of information manipulation networks is partly embedded in the architecture of the digital advertising market itself.

One system: algorithms, monetisation, and platform incentives

Recommender systems, advertising, and platform governance are often discussed separately, but in practice they form a single economic and technological environment. Recommender systems optimise for engagement and time spent. Monetisation programmes reward high-traffic creators and publishers. Advertising systems allocate revenue based on attention metrics.

That structure tends to favour content that is emotionally charged, polarising, or sensational: the same properties frequently exploited in information manipulation campaigns. This does not require platforms to “intend” to promote disinformation or harmful content. It is a systemic bias in favour of high-engagement information environments, which coordinated actors can exploit.
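
To make that bias concrete, here is a toy sketch with invented posts and invented predicted-engagement numbers: when a feed ranks purely on a single attention metric, content engineered for outrage outranks sober reporting, without anyone intending that outcome.

```python
# Invented example posts with hypothetical predicted-engagement scores.
# A pure engagement objective needs no knowledge of content quality at all.
posts = [
    {"title": "Measured policy analysis", "predicted_engagement": 0.12},
    {"title": "Outrage-bait impersonating a news brand", "predicted_engagement": 0.58},
    {"title": "Local reporting", "predicted_engagement": 0.09},
]

# Engagement-only ranking: sort by the single attention metric.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for rank, post in enumerate(feed, start=1):
    print(rank, post["title"])
```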

For hybrid threat actors, that creates an asymmetric advantage: relatively small networks can achieve outsized reach by leveraging attention-optimisation logic built into digital platforms.

Why the incentive problem is getting sharper

Digital advertising, especially programmatic advertising, creates financial incentives that reward engagement maximisation regardless of quality or truthfulness. At the same time, social media platforms are no longer just intermediaries in the advertising supply chain. They have launched revenue-sharing and content monetisation programmes, further aligning platform and creator incentives around reach and engagement.

In other words, the market is not just distributing content. It is paying for it.

What can be done: market-based approaches

Several market-based approaches are emerging. They are not interchangeable, and they do not deliver the same type of leverage.

1) Ranking algorithms 

Ranking systems aim to demote or filter out misleading content on platforms and in search results. Many studies report strong detection performance across topics. But ranking also carries error risks (false positives and false negatives) and tends to become an adaptation game in adversarial environments. It can reduce visibility. It does not reliably disrupt the enabling economics.
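
A minimal sketch of that error trade-off, using invented classifier scores and labels: moving the demotion threshold trades legitimate content wrongly demoted against misleading content missed, a boundary adversarial actors can probe and adapt to.

```python
# Hypothetical "misleadingness" scores from a classifier (0 = clean, 1 = misleading),
# paired with ground-truth labels. Both scores and labels are invented.
scored = [(0.92, True), (0.71, True), (0.55, False), (0.40, True), (0.10, False)]

# At any threshold, some legitimate items are demoted (false positives)
# or some misleading items slip through (false negatives).
for threshold in (0.5, 0.8):
    false_pos = sum(1 for s, bad in scored if s >= threshold and not bad)
    false_neg = sum(1 for s, bad in scored if s < threshold and bad)
    print(f"threshold={threshold}: {false_pos} legitimate items demoted, "
          f"{false_neg} misleading items missed")
```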

2) Advertising demonetisation

The most direct lever is cutting off advertising revenue to disinformation publishers. Civil society pressure and organisations such as GDI have helped push brands away from harmful placements. The EU’s 2022 Strengthened Code of Practice on Disinformation formalised this principle, urging advertisers to avoid placing ads next to disinformation content.

However, demonetisation faces structural barriers. Disinformation sites actively circumvent brand-safety protections. Research documents how actors pool inventory (“dark pooling”), meaning advertisers can still inadvertently fund low-integrity publishers. That is why demonetisation is most effective when it moves beyond reactive, URL-based blocking and is grounded in supply-chain understanding and repeatable patterns.
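
The contrast between URL-based and pattern-based approaches can be sketched in a few lines. The domains, the ad account ID, and the idea of keying on that ID are illustrative assumptions, not GDI methodology: a URL blocklist misses a relaunched domain, while the recurring monetisation fingerprint still catches it.

```python
# Invented data: a known low-integrity site relaunches under a new domain
# but keeps monetising through the same ad account.
blocklist = {"lowintegrity-news.example"}
sites = [
    {"domain": "lowintegrity-news.example", "ad_account": "acct-4711"},
    {"domain": "lowintegrity-news-2.example", "ad_account": "acct-4711"},  # mirror
]
flagged_accounts = {"acct-4711"}  # pattern learned from prior supply-chain mapping

for site in sites:
    url_hit = site["domain"] in blocklist          # misses the mirror domain
    pattern_hit = site["ad_account"] in flagged_accounts  # catches both
    print(site["domain"], "url-based:", url_hit, "pattern-based:", pattern_hit)
```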

3) Supply-chain transparency

Supply-chain transparency is the enabling condition that makes demonetisation durable. Disinformation sites exploit opacity by pooling their ad inventory with legitimate sites. Research suggests a small number of major ad exchanges play disproportionate roles in the dark pools exploited by misinformation websites.
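
One concrete, hedged sketch of what such transparency work can look like: ads.txt files (a real IAB Tech Lab standard) list which seller account IDs are authorised to sell a domain's ad inventory, so cross-referencing them can surface accounts that resell for both legitimate and low-integrity domains, one possible pooling signal. The domains and IDs below are invented.

```python
from collections import defaultdict

# Parsed ads.txt entries: domain -> set of (exchange, seller account ID).
# Entries here are invented; real ads.txt lines look like:
#   exampleexchange.com, 12345, RESELLER
ads_txt = {
    "reputable-news.example": {("exampleexchange.com", "12345")},
    "lowintegrity-a.example": {("exampleexchange.com", "12345")},  # same reseller account
    "lowintegrity-b.example": {("exampleexchange.com", "67890")},
}
low_integrity = {"lowintegrity-a.example", "lowintegrity-b.example"}

# Invert: which domains does each seller account sell for?
sellers = defaultdict(set)
for domain, entries in ads_txt.items():
    for exchange, account in entries:
        sellers[(exchange, account)].add(domain)

# A seller account serving both sides of the list is a pooling signal
# worth manual review: low-integrity inventory may be hiding behind it.
for (exchange, account), domains in sellers.items():
    if domains & low_integrity and domains - low_integrity:
        print(f"{exchange}/{account} pools: {sorted(domains)}")
```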

This is why mapping and tracking the supply chains behind manipulation networks is not a side issue of “ad-tech plumbing”. From a hybrid-threat perspective, it is a resilience capability: it helps identify the recurring pathways behind disposable domains and accounts, enabling interventions to move upstream.

4) Platform market design

Disinformation should also be understood through a market-shaping lens: platforms have built markets designed to monetise engagement, creating incentives to circulate deceptive or emotionally charged content. There are even credible proposals to introduce “social welfare” mechanisms and quality-weighted signals.
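
What a quality-weighted signal could mean in practice is easiest to see in a toy sketch. The weighting rule, scores, and posts below are invented for illustration, not drawn from the proposals cited above: discounting predicted engagement by an independent quality score changes which content rises.

```python
# Invented posts with a hypothetical predicted-engagement score and a
# hypothetical independent quality score in [0, 1] (e.g. source credibility).
posts = [
    {"title": "Measured policy analysis", "engagement": 0.12, "quality": 0.9},
    {"title": "Outrage-bait impersonating a news brand", "engagement": 0.58, "quality": 0.1},
    {"title": "Local reporting", "engagement": 0.09, "quality": 0.8},
]

# Quality-weighted objective: engagement discounted by quality. Under this
# rule the outrage-bait post drops from first place to last.
for post in posts:
    post["score"] = post["engagement"] * post["quality"]

for rank, post in enumerate(sorted(posts, key=lambda p: p["score"], reverse=True), 1):
    print(rank, post["title"], round(post["score"], 3))
```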

However, engagement-maximising systems persist because they work commercially. Redesign tends to reduce the very metrics platforms use to measure success and generate revenue. That is why voluntary change at scale is unlikely without regulatory pressure, including systemic risk obligations under frameworks like the DSA.

From mapping to disruption

If the market is paying for manipulation, then accountability cannot stop at content outcomes. The practical question becomes: where exactly can the flow be disrupted, and who has the visibility to do it across platforms, intermediaries, and shifting domains?

This is where GDI is concentrating its work. Not on single channels or one-off lists, but on the repeatable monetisation patterns that sit behind disposable nodes: ad placements that keep recurring on low-integrity sites, the intermediaries that make those placements possible, and the cross-platform routes that convert attention into revenue.

Two routes follow from that logic. Demonetisation, to disrupt revenue where it is already flowing. And supply-chain mapping, to make the flow legible enough to disrupt upstream, so interventions can target the plumbing, not just the symptoms.
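
As a hedged illustration of the mapping route, assuming the monetisation relationships have already been extracted (all domain and intermediary names below are invented): representing the chain as a simple graph makes upstream chokepoints visible as the intermediaries that many disposable assets route through.

```python
from collections import Counter

# Invented edges: (downstream asset, upstream intermediary it monetises through).
edges = [
    ("mirror-a.example", "reseller-x"),
    ("mirror-b.example", "reseller-x"),
    ("mirror-c.example", "reseller-x"),
    ("mirror-c.example", "exchange-y"),
    ("one-off.example", "exchange-y"),
]

# Count how many disposable assets route through each intermediary: the
# highest-degree nodes are candidate upstream intervention points, since
# disrupting one of them affects every asset that depends on it.
degree = Counter(upstream for _, upstream in edges)
for node, n in degree.most_common():
    print(node, "routes", n, "assets")
```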

That is what “raising costs” of information manipulation looks like in practice: fewer safe harbours in the ad ecosystem, less paid reach for coordinated manipulation, and a narrower set of pathways through which influence networks can fund persistence.
