A case for the DSA to be more prescriptive

Liam Melia
January 14, 2026
5 min read

I spent some time over the Christmas break reflecting on two announcements from the EU and from the Irish regulator (CnaM):

  1. CnaM’s investigation into the Illegal Content Reporting mechanisms of TikTok and of LinkedIn
  2. The EU fine against X, specifically the case of deceptive design in the use of the bluetick checkmark

Both got me thinking about whether regulator investigations and platform compliance might be easier if the DSA were more explicit or prescriptive. 

Two disclaimers at this point:

  1. This is not a comprehensive commentary on these two cases. I will not be discussing the merits of each or opining on how I think they will play out.
  2. My last in-house FTE role in industry before founding Pelidum was with TikTok where I project-managed the implementation of a lot of the Content Moderation obligations of the DSA. I left full-time employment at TikTok in July 2024. The views expressed here are mine alone. They are informed by my in-house experience but do not reflect, nor are they to be associated with, any of my previous employers.

Overview of the case against TikTok and LinkedIn

The press release from CnaM states that they are investigating whether the Illegal Content Reporting mechanisms:

  • are easy to access and user-friendly
  • allow people to report suspected child sexual abuse material anonymously
  • deceive people from reporting content as illegal

Let’s look a bit closer at each area of investigation:

Easy to access and user-friendly

Recital 50 of the DSA states that the Illegal Content Reporting mechanism should be:

  • clearly identifiable 
  • located close to the information in question 
  • and at least as easy to find and use as notification mechanisms for content that violates the terms and conditions of the hosting service provider

There is room to debate and interpret these requirements, for instance:

  • Should the mechanism explicitly reference the DSA or the EU?
  • What constitutes ‘close’? The number of clicks from the content? Prominence or position within the reporting menu? Should it be at the top of the menu? Or immediately visible when clicking on the content?
  • What makes it ‘at least as easy to find and use’ as T&Cs reporting? Again, is it prominence? Labelling? Position in the UI? The number of fields to fill in? How many boxes to tick? How many screens to navigate through?

I have spent long hours working through the questions above. In many cases, I concluded that it was a matter of trade-offs rather than one approach being clearly better than another. For example, a simple reporting menu with fewer fields to fill in might make it easier for a user to submit a report, but collect less information about the alleged illegality. On the other hand, a more exhaustive reporting form with multiple fields for contextual information may facilitate more comprehensive reporting but be overwhelming for the average user.

Allow people to report suspected child sexual abuse material anonymously

This requirement is very much a binary: either you allow for anonymous reporting of CSAM, or you don’t. I don’t see any immediate benefit to making this more prescriptive. 

Deceive people from reporting content as illegal

I suspect that this investigation stems from the fact that platforms maintain parallel channels for violations of their own T&Cs, and for Illegal Content Reporting under the DSA. Whether the UI is deceptive or not will be determined by factors such as:

  • The wording of each reporting option
  • Position and prominence within the reporting menu
  • The font, colour and affordance of the different options

While there may be some scope to tighten up guidelines here, I expect that these investigations will always be highly qualitative. Too many variables are at play to make this a simple binary type of decision.

The case against X on its bluetick mark

I am only going to address one aspect of the case against X: suspected deceptive design in the use of bluetick marks, as outlined in this announcement from the European Commission.

The basis for the case seems to me fairly straightforward: most platforms use a bluetick to indicate that an account is officially verified, which is particularly helpful for politicians, celebrities, companies, journalists and other public figures and institutions that are vulnerable to being impersonated. Inter alia, the bluetick mitigates against disinformation, fraud and scams. The contrarian X platform has put blueticks up for sale and uses grey ticks for verified accounts.

The problem of course is that, to the best of my knowledge, it is not codified anywhere that a bluetick must mean that an account is verified, neither in the DSA nor in any other regulation (grateful to anyone who can tell me otherwise). It is a de facto rather than a de jure standard. I will let the lawyers in the room settle the question of whether a regulator can enforce a de facto standard.

Speculation

I will now speculate on whether more prescriptive guidelines would put a lot of these questions to bed and reduce the need for meticulous investigation and back-and-forth between regulators and platforms:

  1. Simply mandate a label for the reporting mechanism - just say it must be called ‘EU/DSA Illegal Content’, end of discussion

  2. Mandate a position in the reporting menu (within the top n options, or immediately visible on opening the menu)

  3. Mandate that blueticks mean verified accounts and nothing else

Regulatory overreach in the EU remains a hot topic, and I have written previously about elements of the DSA where I felt that compliance requirements were set at too high a threshold - yes, I am looking at you, Article 20(1) and (2), and your six-month timeframe for appeals.

However, sometimes simply laying down the law can make it easier for everyone involved. It makes requirements more clear-cut and therefore compliance less ambiguous - it becomes a question of 0 or 1, and obviates the need for lengthy debates from first principles about what constitutes ‘deceptive’, ‘easy to access’, or ‘user-friendly’.

A good case in point is the turnaround time (TAT) for processing reports under Articles 9, 10, 16, 20 and 22. The timeline given is ‘without undue delay’. I have had many lengthy debates about what this should translate into in operational terms: 24 hours? 48? 72? 12?

Ultimately, legal and T&S teams had to come to an agreement about what they felt was operationally feasible and legally defensible. This type of horse-trading was not necessary when I was at Facebook in 2017 working on the NetzDG law. The TAT was defined as 24 hours, with a provision of a further week for especially complex cases.

As a result, no time was spent debating what the TAT should be on the moderation queues for NetzDG. We simply spent time trying to figure out how to make it work. This is what I have in mind when I speculate that being more prescriptive might make life easier for everyone involved. 

I would not envy a regulatory body trying to investigate whether a platform was handling reports ‘without undue delay’. While we might all agree that a TAT of one month would be clearly non-compliant, such egregious cases are rare. Building a case would be easier if the TAT were clearly defined. It would also reduce the scope for legal challenges, making the compliance process more straightforward and a little less arbitrary.

That gives rise to a paradox. Picking cut-off points, setting standards and publishing guidelines will inevitably involve some element of arbitrary decision-making: at some point someone has to decide whether a tick should be blue or grey or pink or whatever colour you like, or that 48 hours constitutes ‘without undue delay’. But front-loading those arbitrary decisions into the legal text makes the adjudication of compliance far less arbitrary, and ultimately allows T&S professionals to get on with the job.

Liam Melia is the COO and co-founder of Pelidum Trust & Safety

Interested in learning more?

Book some time with us for more information on what we do and how we can help your organisation.
