Renewed Threats towards Irish Deputy PM

Liam Melia
September 25, 2025
5 min read

Around this time last year, we wrote a blog post about death threats made against the then Irish Prime Minister (Taoiseach), Simon Harris, which, according to media reports, had remained live on Instagram for over 24 hours after being picked up. That post laid out how these kinds of errors can happen in spite of everyone’s best efforts.

A few weeks ago, Simon Harris, now Deputy Prime Minister (Tánaiste), and his family were again on the receiving end of threats issued via online platforms. We have written this short post to explain how platforms handle threats to life, from policy and detection through enforcement and escalation.

Policy

Every tech platform I have worked at has had robust policies in place against threats of violence. These policies specifically defined credible threats of violence in order to identify those requiring escalation to law enforcement (more on this below). The threshold for enforcement and escalation is generally significantly lower for politicians and other elected officials, given their risk profile.

Detection

Detection falls into two fundamental categories: reactive and proactive. Proactive detection of threats of violence typically relies on text classifiers, which can be augmented with image or other contextual signals.
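To make that concrete, here is a deliberately simplified Python sketch of what a proactive-detection check might look like. Everything here is invented for illustration: real platforms use trained ML classifiers over far richer signals, not keyword lists, and the names, weights and threshold below are purely hypothetical.

```python
from dataclasses import dataclass

# Toy stand-in for a real ML model: production systems do not use keyword lists.
VIOLENT_PHRASES = ("kill", "shoot", "stab")


@dataclass
class Post:
    text: str
    author_is_new_account: bool  # one example of a contextual signal


def score_text(post: Post) -> float:
    """Return a pseudo-probability that the post contains a violent threat."""
    hits = sum(phrase in post.text.lower() for phrase in VIOLENT_PHRASES)
    base = min(1.0, 0.4 * hits)
    # Contextual augmentation: brand-new accounts get a small risk bump.
    bump = 0.1 if post.author_is_new_account else 0.0
    return min(1.0, base + bump)


def should_flag(post: Post, threshold: float = 0.5) -> bool:
    """Route the post to human review if the score clears the threshold."""
    return score_text(post) >= threshold
```

The point of the sketch is the shape of the pipeline, not the scoring: a text score, augmented by contextual signals, compared against a review threshold.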

Reactive detection can take the form of classic user reporting. Large businesses, civil society organisations and other VIPs, including politicians and political parties, often have more direct lines to large platforms. This is not as nefarious as it is at times depicted. Large businesses are more likely to be the victims of IP infringement; VIPs and politicians are more likely to be the victims of online harassment, impersonation and, alas, threats. Having a direct line* enables platforms to react quickly to high-severity reports from at-risk users.

Standard user reports typically go to front-line moderator teams, which are for the most part staffed by BPOs for large platforms. Reports coming in from direct outreach typically go to internal teams, such as Escalations or Incident Response. 
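The routing split described above can be sketched in a few lines. The channel and queue names here are hypothetical; actual platforms have far more intake sources and queues, but the principle is the same: trusted, direct channels bypass the front line.

```python
def route_report(source: str, severity: str) -> str:
    """Return the (illustrative) queue a report would land in.

    Direct/trusted channels go straight to internal teams; standard user
    reports go to front-line moderation queues, often staffed by BPOs.
    """
    if source in {"direct_line", "trusted_flagger", "law_enforcement"}:
        return "internal_escalations"
    if severity == "high":
        return "priority_frontline"
    return "frontline_bpo"
```

A high-severity standard report still reaches humans quickly, but through a prioritised front-line queue rather than the internal escalations team.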

Enforcement

Violent threats are almost always deleted and posters will often incur account-level sanctions, up to and including permanent bans. 
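As a sketch of how that plays out, many platforms operate something like a strike ladder: the content is removed regardless, and the account-level sanction escalates with repeat offences. The thresholds and action names below are invented for illustration; real enforcement matrices vary by platform and policy area.

```python
def sanction_for(prior_strikes: int) -> str:
    """Map a user's prior strike count to an account-level action (illustrative)."""
    if prior_strikes >= 2:
        return "permanent_ban"
    if prior_strikes == 1:
        return "temporary_suspension"
    return "warning"


def enforce(prior_strikes: int) -> tuple[str, str]:
    # Violent threats are almost always deleted, whatever the strike count;
    # only the account-level sanction varies.
    return ("delete_content", sanction_for(prior_strikes))
```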

Outbound escalation

Credible threats of violence, whether to NPCs like you and me or VIPs like our elected politicians, are usually escalated to law enforcement. Several criteria are used to distinguish genuine threats from language that, in context, is nothing more than humour, satire or hyperbole: ‘I’m going to come over and shoot you’ is perfectly acceptable communication during a game of Call of Duty, for instance. Similarly, ‘we’re going to destroy Liverpool’ is almost certainly figurative if posted by an Everton fan before a Premier League derby game.

Teams will look at the timestamp of the message, as well as indications of timing or method of the threat. They may also take into account further contextual information, such as geographical proximity or interaction history between the users: for example, a threat towards an Irish politician stemming from a fake account based outside of Ireland may be less credible than a threat from an authentic account based in the same city as the politician.
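Those signals lend themselves to a simple weighted assessment. The sketch below is purely illustrative: the field names, weights and threshold are invented, and real credibility assessments are made by trained human specialists, not a point score.

```python
from dataclasses import dataclass


@dataclass
class ThreatContext:
    mentions_time_or_method: bool   # e.g. a named date or weapon
    same_locality_as_target: bool   # geographical proximity
    account_appears_authentic: bool  # not a fake/throwaway account
    prior_contact_with_target: bool  # interaction history


def credibility_score(ctx: ThreatContext) -> int:
    """Sum of weighted signals; higher means more credible (illustrative)."""
    score = 0
    score += 3 if ctx.mentions_time_or_method else 0
    score += 2 if ctx.same_locality_as_target else 0
    score += 2 if ctx.account_appears_authentic else 0
    score += 1 if ctx.prior_contact_with_target else 0
    return score


def escalate_to_law_enforcement(ctx: ThreatContext, threshold: int = 5) -> bool:
    """Does the threat clear the (hypothetical) credibility threshold?"""
    return credibility_score(ctx) >= threshold
```

Under this toy weighting, the example from above works out as intended: a specific threat from an authentic account in the politician’s own city clears the threshold, while a vague remark from a distant fake account does not.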

If the credibility threshold is reached, then the escalations or incident management team will perform an outbound escalation to the relevant Law Enforcement authorities. As an aside, a legal framework for this exists within the EU and is covered in Article 18 of the DSA. 

Inbound escalation

The above sequence of events assumes that platforms are alerted to the content by their own detection systems or by users. It also happens that law enforcement alerts platforms to the content and simultaneously requests data on the suspected user account behind the threatening material.

Major platforms have their own intake portals for law enforcement and have had processes in place for several years for handling such requests. They have long published transparency reports on the number of data disclosures they perform on an annual basis. As a final aside, Article 10 of the DSA puts in place further legal obligations on both platforms and law enforcement authorities in the EU.

By way of illustration, below are the links to Meta’s Law Enforcement reporting portal and to LinkedIn’s global transparency report for Government Data Requests:

The Pelidum team has years of experience in setting up and staffing the front-line teams that handle these incidents. If your team needs support, our DMs are open.

*the question of whether direct lines from companies, celebrities, NGOs, politicians, etc are ever misused is worthy of discussion in and of itself but is not the topic of this post.


Liam Melia is the co-founder and COO of Pelidum Trust & Safety

Interested in learning more?

Book some time with us for more information on what we do and how we can help your organisation
