
Antisemitism Isn’t Just a Bug in the System. It’s Being Amplified by It.
As Australia headed into its 2025 federal election, a darker undercurrent pulsed through its digital platforms. CyberWell, a watchdog group specializing in online antisemitism, uncovered a disturbing trend: antisemitic narratives were not just circulating—they were being algorithmically amplified to more than 257,000 users. Using proprietary monitoring tools guided by the International Holocaust Remembrance Alliance (IHRA) working definition of antisemitism, CyberWell flagged 548 posts between November 2024 and April 2025. Of a manually reviewed sample of those posts, 80 were confirmed antisemitic.
The responses from social media platforms varied starkly. X (formerly Twitter) removed just 5% of flagged content, which CyberWell attributes to its permissive “Civic Integrity” policy, while Facebook removed nearly 90%. Classic antisemitic conspiracies—like the Kalergi Plan—reemerged in digital camouflage, retooled into memes and coded language to evade detection.
CyberWell argues that such normalization of Jewish hatred poses a direct threat to democratic norms, public safety, and civil discourse. They advocate for mandatory IHRA-based moderator training and stronger enforcement. Platforms like YouTube and Facebook, which maintain clearer policies and trusted partnerships, demonstrated more robust moderation. But as the data suggests, uneven enforcement leaves critical gaps—ones that extremists are all too eager to exploit.

Scott Douglas Jacobsen: How did CyberWell identify and verify the posts?
Tal-Or Cohen Montemayor: CyberWell utilizes a combination of social media listening tools and a proprietary monitoring system to identify posts that are highly likely to be antisemitic according to the IHRA working definition. Between November 11, 2024, and April 22, 2025, CyberWell’s monitoring technology flagged 548 posts in English on Facebook, X (Twitter), TikTok, and YouTube that included keywords related to the Australian federal election and had a high likelihood of being antisemitic.
Of the 548 posts, CyberWell selected a sample for manual review. In total, 80 posts were confirmed as antisemitic by CyberWell’s research team, according to the International Holocaust Remembrance Alliance’s (IHRA) working definition of antisemitism. The high level of engagement around a sample of just 80 posts suggests that exposure to deeply anti-Jewish narratives ahead of the election period in Australia was far greater than what CyberWell’s dataset alone captures.
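To make the general shape of such a workflow concrete, the sketch below is a minimal, purely illustrative outline of keyword-based flagging followed by sampling for manual review. CyberWell’s actual system is proprietary, so the keyword lists, flagging rule, function names, and sampling step here are hypothetical placeholders rather than the organization’s method.

```python
# Illustrative sketch only: CyberWell's monitoring system is proprietary, so the
# lexicons, flagging rule, and sampling step below are hypothetical placeholders.
import random
from dataclasses import dataclass

# Hypothetical lexicons: election-related keywords and antisemitic trope markers.
ELECTION_TERMS = {"australian election", "federal election", "auspol"}
TROPE_TERMS = {"kalergi", "puppet masters", "globalist control"}


@dataclass
class Post:
    post_id: str
    platform: str
    text: str


def likely_antisemitic(post: Post) -> bool:
    """Flag a post that mentions the election alongside at least one trope marker."""
    text = post.text.lower()
    mentions_election = any(term in text for term in ELECTION_TERMS)
    mentions_trope = any(term in text for term in TROPE_TERMS)
    return mentions_election and mentions_trope


def build_review_queue(posts: list[Post], sample_size: int, seed: int = 0) -> list[Post]:
    """Select a random sample of flagged posts for manual review against the IHRA definition."""
    flagged = [p for p in posts if likely_antisemitic(p)]
    rng = random.Random(seed)
    return rng.sample(flagged, min(sample_size, len(flagged)))
```

In a workflow of this kind, the flagged set (548 posts in this dataset) is then reviewed by researchers against the IHRA working definition, and confirmed posts (80 here) are tracked per platform along with their removal status.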
Jacobsen: Elon Musk’s X platform removed only ~5% of flagged antisemitic election content compared to 54.2% in 2024. What explains the dramatic drop in moderation?
Montemayor: The difference in removal rates between X and the other platforms is largely due to their policy approach to election-related content. Much of the hate speech that intersects with election issues is mistakenly perceived by X and its moderators as political expression and, therefore, allowed on the platform. X has the most permissive “Civic Integrity” policy, and it appears that much of the antisemitic election-related content is categorized under this policy as far as they are concerned. This extraordinarily low rate of actioning open Jewish hatred is not something we have encountered before.
Additionally, the gap between X’s rate of removal of antisemitic election content and its average rate of removal in 2024 highlights a key issue when relying on user reporting and escalation to major social media platforms, particularly X: response time. The average rate of removal of reported antisemitic content by X in 2024, as collected by CyberWell, is a snapshot taken at the end of the calendar year, giving the platform many months to respond to our reporting. X’s removal rate for the antisemitic Australian election dataset collected by CyberWell, approximately 5%, reflects the rate of removal three to five days after the content was reported to X. While platforms take days to respond to user reports, the engagement algorithms continue to push and suggest content, especially ahead of events of wide public interest like a national election.
Jacobsen: Your report mentions the use of classic antisemitic conspiracy theories, such as the Kalergi Plan and alleged Jewish control over political parties. How have these narratives evolved?
Montemayor: The dominant theme in election antisemitism is conspiracy theories about Jewish global control and influence. These narratives characterize Jews as manipulative puppeteers who secretly control governments, political leaders, and the electoral process itself. Antisemitic conspiracy theories—such as the Kalergi Plan and claims of Jewish control over specific political parties—have evolved online by merging with contemporary political narratives and global events.
On social media, this very old anti-Jewish idea is often repackaged using coded language, emojis, and memes. The conspiracy theories suggesting secret Jewish control frequently surface in discussions about major political events, such as federal elections, where antisemitic tropes are embedded within broader ideological discourse. This blending allows hate actors to evade platform policies and challenges enforcement in practice while spreading this harmful narrative to mass audiences during times of increased social sensitivity and tension. This is extremely dangerous for the Jewish community in Australia, which is already experiencing a marked rise in violent and targeted attacks.
Jacobsen: How have these gained traction in digital political discourse during election cycles?
Montemayor: Towards the end of the summer, CyberWell will release a comparative analysis of antisemitic narratives during election cycles, examining how these anti-Jewish trends gained popularity and audience during the UK, U.S., Canadian, and Australian elections.
However, we can share that in each of the four election cycles, classic antisemitism alleging disproportionate Jewish power, along with conspiracies of covert control, was the most prevalent type of Jewish hatred in election antisemitism across the board. This indicates that the dominant antisemitic theme in these datasets centers on conspiracy theories about Jewish global control and influence.
Notably, this form of classic antisemitism, consistent with the second example of the IHRA working definition, closely aligns with the core principles of major social media platforms’ hate speech and hateful conduct policies. This content includes offensive generalizations, harmful stereotypes, and conspiracy theories targeting a “protected group,” including those defined by religious affiliation or belief.
Since these carve-outs and protections are already recognized by most large social media platforms in their policies, it is reasonable to expect that platforms would enforce their policies against this type of content effectively. In practice, however, enforcement of election-related antisemitic hate speech appears to be significantly lower than typical enforcement rates against online Jewish hatred.
Political rhetoric focused on candidates and party platforms, even when irate and critical, is an important part of freedom of expression and political speech. However, the targeted violence against the Australian Jewish community and other Jewish communities across the globe has proven that online conspiracy theories and hatred have real-world consequences.
Jacobsen: How does CyberWell’s application of the IHRA working definition of antisemitism help distinguish antisemitic content from legitimate political rhetoric?
Montemayor: The International Holocaust Remembrance Alliance’s (IHRA) working definition of antisemitism is a globally recognized consensus definition, rooted in multi-disciplinary expertise, that CyberWell uses as a discourse analysis tool. The eleven examples featured in the IHRA working definition provide a framework for a lexicon focused on identifying particular beliefs, conspiracy theories, and narratives that are the cornerstones of Jewish hatred. We apply the definition as a tool for narrative analysis in context. It not only helps us monitor specific narratives online but also allows us to organize our data and track spikes in particular tropes, accusations, slurs, and narratives.
Jacobsen: What are the strengths and weaknesses of the IHRA definition of antisemitism? How can social media companies improve enforcement during elections?
Montemayor: A major strength of the IHRA working definition is that it provides a comprehensive consensus definition of antisemitism that addresses the multifaceted nature of Jewish hatred as it has evolved over time and up to the modern day.
Through the eleven categories laid out in the definition, the IHRA working definition covers the evolution of Jewish hatred: from its historical roots in religious antisemitism, through race-based Jewish hatred during the Holocaust, to its most modern iteration, political antisemitism, in which Jews are vilified as agents of the Israeli state, the concept of Jewish self-determination is demonized, and the state of Israel or Israeli identity is used as a touchstone for promoting classic and openly anti-Jewish tropes, biases, and hatred. However, as one of the most complex forms of hatred, even this working definition needs updates.
For example, CyberWell’s research into online antisemitism, particularly the October 7 denial campaign, has revealed that purposeful denial of atrocities or attacks committed against the Jewish community is a form of contemporary antisemitism. The denial or ‘false flag’ narrative, either blaming the victims for the attack or erroneously claiming that they staged the attack, has also been used to delegitimize and dismiss the attacks against the Jewish community in Australia from Sydney to Melbourne. The recognition of Holocaust denial and distortion as a form of antisemitism, featured in the IHRA working definition, should be applied to the purposeful denial or distortion of atrocities committed against Jews for being Jews.
Some social media platforms have gone on the record stating that they use the IHRA working definition as a reference point when updating their policies. In practice, however, the enforcers of those policies, the content moderators, are often outsourced by major platforms to third-party providers around the world; they are unfamiliar with the IHRA working definition, and there is no indication that it is part of their regular training material.
Applying the IHRA working definition more comprehensively within platforms’ existing policies, making the definition part of content moderator training, and implementing recommendations from off-platform experts like CyberWell, including reliance on specialized datasets and keywords around events like elections, would significantly improve enforcement of digital policy on social media.
Jacobsen: There is a growing normalization of antisemitism online and offline in Australian society. What are the urgent consequences of this normalization?
Montemayor: The normalization of antisemitism—both online and offline—erodes social tolerance and creates an environment where hate speech, hostility, and violence against Jewish citizens are more likely to be accepted or ignored. It emboldens extremist actors to act criminally and violently, legitimizes dangerous conspiracy theories that erode trust, and fosters a climate of fear within Jewish communities. When antisemitic rhetoric goes unchecked, it weakens democratic norms and desensitizes the public to open bigotry and hatred. This is why many Jewish communities are experiencing increased harassment, threats to community safety, and the risk of real-world attacks—the increased violence is fueled by online radicalization and algorithmically charged hate speech. Platforms must take responsibility for systematic and effective enforcement of their own digital policies in order to stem the tide of increasing violence.
Jacobsen: Facebook and YouTube demonstrated stronger enforcement. Why are they more proactive? Are they more successful?
Montemayor: Unlike the other platforms, YouTube takes a more defined stance by including specific policies on hate speech related to elections and civic integrity. The platform explicitly prohibits hate speech and harassment in the context of elections. Reflecting this policy, YouTube had the fewest antisemitic posts in the dataset. While the removal rate stood at 0%, this is attributable to the fact that only one video was identified during the monitoring period.
Overall, CyberWell’s research across platforms suggests that the more explicit a policy is, the more effectively it is enforced. This holds both for technological resources, such as pre-emptive removal by AI classifiers, and for human content moderation, which reviews users’ reports of violating content. While Facebook does not currently include explicit clauses in its policies targeting election-related hate speech, it demonstrated the highest rate of content removal, taking down 89.47% of the reported posts. It is also worth noting that CyberWell is a trusted partner of TikTok and Meta, but not an official partner of YouTube, which may support stronger response mechanisms by Meta for reported antisemitic content.
Jacobsen: Thank you for the opportunity and your time, Tal-Or.