The Good, the Bad and the Algorithm

Navigating antisemitism on TikTok
Jan 10, 2023
Deep Dives, Winter Issue 2023
The TikTok logo displayed on a phone.

Hundreds of teenage girls put on headscarves and makeup to look like bruised and emaciated Holocaust victims and invent fictional accounts of dying in Nazi concentration camps, overlaid in some instances with the Bruno Mars pop hit “Locked Out of Heaven.” Two Minnesota high school students dance in a boxcar and appear to skip into Auschwitz to “Tell Everybody I’m on my Way” from the Disney movie Brother Bear. Footage of dancing Orthodox Jews is shared with the caption, “Jews getting lit, knowing they basically own the media.” These are just a few examples of the discordant and disturbing playfulness that can animate antisemitic content on the wildly popular video-sharing app TikTok. To be sure, there are plenty of other types of antisemitism on TikTok, some more nefarious, as well as content that attempts to counteract it.

TikTok is a behemoth in the social media and entertainment world, having doubled in the last year to more than a billion users worldwide, the majority of them Gen Z (those 25 and under). “Our mission is to capture and present the world’s creativity, knowledge, and moments that matter in everyday life,” says the company’s website. “TikTok empowers everyone to be a creator directly from their smartphones and is committed to building a community.” The company is also looking to expand beyond video into gaming, music streaming and e-commerce. While all this is giving investors the tingles, TikTok, part of the larger Chinese tech company ByteDance, is giving others, including the U.S. government, the shivers. (In December 2022, Congress voted unanimously to ban TikTok on federal devices.) Data privacy is a top concern, as is the fear that the Chinese government could use the app’s powerful algorithm for influence campaigns.

As with all forms of social media, users thrive on how others interact with what they post, tweet or share: a keen political jab via Twitter, a vacation photo shared on Facebook or, in the case of TikTok, short bursts of video or pictorial memes featuring anything under the sun, typically combined with music, video effects, filters, emojis and more. And while there’s no shortage of light-hearted, informational and boldly creative content on TikTok, creators’ and consumers’ darker impulses are on display too.

A few of the TikTok creators who have dressed up as Holocaust victims. (Photo credit: TikTok screenshot)

As detailed in a 2021 report from the Institute for Strategic Dialogue (ISD), a UK-based think tank dedicated to combating extremism and polarization, hateful messaging on TikTok is fueled by features unique to the platform and by its innovative, even revolutionary, recommendation algorithm, which quickly learns users’ interests and generates an endless stream of suggestions. This AI algorithm is spoken of as something close to sentient—its power to read you and feed you reaching near-mythological status. As people use TikTok, the algorithm assesses not only their tastes in videos, but those of people they follow, and, based on data including likes, follows and comments, it pushes certain content to what’s known as a user’s “For You” page (FYP). Whenever a user opens the app, the FYP appears, and the stream of videos begins.

When TikTok debuted in 2016, videos were just 15 seconds long. The maximum duration was extended to 60 seconds in 2017, to three minutes in 2021 and to 10 minutes in 2022. A Wired magazine article published last year reported that the most popular TikTok videos were 21-34 seconds in length and that the average user opened the app 17 times a day, spending a total of one hour and 25 minutes watching videos. (To clarify terms, TikTok is both the app and a platform, and people refer to an individual video as a TikTok. To make matters even more confusing, some people refer to their TikTok account as their TikTok. So, you can open up TikTok to post a TikTok to your TikTok.)

The algorithm is spoken of as something close to sentient—its power to read you and feed you reaching near-mythological status.

Whatever the terminology, it’s addictive. Research into how this kind of rapid-fire visual stream affects the user—what researchers from the College of Journalism and Communication at Wuhan Sports University in China call the “anesthesia mechanism”—describes an experience in which curiosity, met with pleasure, keeps people passively consuming without always realizing what they’re ingesting and promoting. Gabriel Weimann of the University of Haifa and Natalie Masri of Reichman University, both in Israel, have conducted several studies on TikTok. In “TikTok’s Spiral of Antisemitism,” published in Journalism and Media in November 2021, they characterize the app’s algorithm as “a disconcerting feature” that pushes content to users who may unwittingly boost a video’s popularity if they stay on it too long, click through to comments, or mistakenly “like” it, which can happen easily if you accidentally double-tap on the video.

Playful or not, other antisemitic content one might encounter on TikTok includes stereotypes, conspiracy myths (including that Jews somehow caused COVID-19 or control vaccines), neo-Nazi propaganda and hatred directed at Jewish people over Israeli policy. Holocaust denial or distortion is the most explicit form of antisemitism on the app, says analyst Ciarán O’Connor, who authored the ISD report, titled “Hatescape: An In-Depth Analysis of Extremism and Hate Speech on TikTok.” Doubt that six million Jews died, for example, is coded into videos such as “Homemade Pizza Recipe,” about the challenge of making six million pizzas using four ovens: “We could say we made the pizzas,” the cartoon video begins, “but we’ll also need to make organizations to enforce the fact that we made the pizzas…and we should also make it a crime to even question if we made the pizzas and we’ll also need some TV channels and constant Hollywood films to remind everyone we made the pizzas.” In the same vein, Weimann and Masri describe a TikTok featuring white supremacist Nick Fuentes. In a clip from an episode of his podcast “America First,” Fuentes makes a joke about cookies and how long it would take to bake six million. “I think my math is wrong,” he says smugly. The user who shared this on TikTok added the caption: “6 billion? 6 trillion?”

This brings up the unique way TikTok allows creators to co-opt excerpts of previously posted videos and overlay them with their own audio and captions in what is known as a “stitch.” Original creators can enable the Stitch function when uploading a video, which allows others to use up to five seconds of it. Examples cited in an October Rolling Stone article on how Kanye West’s antisemitic rants had been used on the platform included footage of the rapper performing in concert juxtaposed with a recording of Joseph Goebbels speaking at a Nazi rally in 1933. Another TikTok stitches a slow-motion shot of West raising his arm with audio of someone shouting “Sieg Heil!”

Other TikTok functions include the Duet feature, whereby users can create a split-screen to react to another video. In their research, Weimann and Masri describe a “duet” of a young man opening an oven door and pointing inside placed alongside a Jewish creator’s video about Passover customs. Comments then followed such as, “You have to put a trail of coins for them to follow,” or, “I myself have been gassed over 6,000,000 times!!!” Another function is the Green Screen effect, which allows creators to superimpose themselves over a scene. An example of the Green Screen effect, included in the ISD report, appears to show two young men eerily facing each other in a concentration camp barracks, playacting as a Nazi guard and prisoner but in contemporary street clothes and affecting casual postures. One wears a grimacing skull mask; the caption over the other’s head reads, “Where is my family?”


These are not the only ways antisemitism manifests on TikTok. Some users convey it in usernames such as @holocaustwasgood, @evilJews and so on. Less overt are emojis (the oft-used nose, as well as shower and gas-pump emojis referencing Nazi gas chambers) and coded messages employed in captions and in comments on other videos. Often those pushing antisemitism on TikTok use something termed “algospeak,” misspellings or neologisms used to get around automated content moderation. “Jews” becomes “juice,” for example, and “H1tler,” “HH” or “88” appear as stand-ins for “Hitler” and “Heil Hitler” (‘H’ being the 8th letter of the alphabet). References to “1488” combine the double-eight with the “14 words,” a slogan attributed to American white supremacist David Eden Lane: “We must secure the existence of our people and a future for white children.”

Ironically, coded language is sometimes necessary to call out antisemitism on TikTok, too. “Jewish creators are constantly having to adopt algospeak,” says Raven Schwam-Curtis, a doctoral candidate in African American Studies at Northwestern University who uses they/them pronouns and makes TikToks about their experiences as a Black, queer, femme Jewish person. “You can’t even talk about antisemitism without adjusting the word in captions and text to look more like ‘ant!sem!tïsm.’ It’s kind of tough to combat hate when you can’t freely use the language necessary to address it.” Schwam-Curtis says their comments section is regularly rife with antisemitism as well as anti-Blackness, and that while TikTok did remove two especially hateful stitches, the majority of the reports they’ve filed with the platform haven’t resulted in anything actionable.

“Typically, if I want a situation to be resolved I have to take matters into my own hands,” says Schwam-Curtis. For example, they’ve taken advantage of TikTok’s comment filter feature. “Some of the words I’ve filtered out are ‘mutt,’ ‘half-breed’ and ‘bitch.’” They’ve also been called the n-word, the k-word, gorilla, rat, “and probably more that I’ve just blocked out for self-preservation.”

“You can’t even talk about antisemitism without adjusting the word to look more like ‘ant!sem!tïsm.’ It’s tough to combat hate when you can’t even use language necessary to address it.”

Writer/comedian Eitan Levine amassed a large following on TikTok making humorous videos but says he’s backed off, not so much because of hateful reactions to his content but because the app too often flagged anything Jewish as antisemitic and removed it, or anti-Jewish users would report Jewish content for removal. Of the antisemitic comments on his videos that got past content moderation, the direct insults or swastikas weren’t the worst. “I’d leave those up,” says Levine. What really bothered him was when he’d post silly comedy bits and people would write “FREE PALESTINE” in the comments or post a Palestinian flag. “Come on, it’s a bagel video!” he says. “They’re co-opting an actual issue just to be straight-up antisemitic.”

Efforts to estimate the extent of antisemitism on TikTok are in the early stages. In a 2022 study, “New Antisemitism on TikTok,” Weimann and Masri measured the growth of antisemitic content on the app over a four-month period in 2020 and again in 2021. Their data suggests that while antisemitism wasn’t rampant on the app during these periods, the growth rate was notable. For example, they identified 41 antisemitic comments to posts in the four-month period in 2020, which increased to 415 antisemitic comments a year later. The ISD “Hatescape” report analyzed 1,030 videos from 491 TikTok accounts associated with extremism and hateful content and identified 153 that were antisemitic. While some of the videos had fewer than 100 views, several had in the hundreds of thousands, and one, featuring footage of a video game set at Auschwitz, had close to 2 million views.

Considering the rapid overall growth of TikTok and the recent spotlight on celebrity antisemitism, it seems inevitable that hostility and prejudice toward Jews would metastasize on the platform.

On the question of how teenagers viewing jokes about Jewish people, antisemitic tropes or Holocaust denial might encounter messaging from organized white nationalist or neo-Nazi groups, Weimann says the algorithm can “flood an innocent kid with more extremist offerings,” calling it a vicious cycle whereby TikTok amplifies hateful messages that originate on fringe corners of the internet, including the dark web (a hidden collection of sites that require a specialized web browser). “The fusion of hate,” he says, is in “postings that channel hate to various target groups,” including Jews, LGBTQ and Black individuals, liberals and women.

Researchers agree TikTok isn’t doing enough to remove hateful, extremist material, nor does it make it easy for researchers to study hateful content on the platform.

Researchers agree TikTok isn’t doing enough to remove hateful, extremist material, nor does it make it easy for researchers to study hateful content on the platform. In July, TikTok announced it would provide select researchers with more transparency about the platform and its moderation system. ISD analyst O’Connor says this hasn’t happened, elaborating that TikTok has yet to offer an application programming interface (API) that academics might use to conduct research at scale, and so it remains difficult and labor-intensive to study user behavior and content. “They have stated that an API is on the way but have yet to offer firm details on when or how it will function,” he says. “Data access and algorithmic or engagement transparency are still sorely lacking on TikTok.” Weimann points out another factor that makes it hard for researchers to track hateful content on TikTok: Since antisemitic content is often visual/pictorial and not accompanied by keywords or hashtags that would flag it as such, it’s hard to quantify exactly how much antisemitic content is on the platform.

In its “Community Guidelines and Enforcement” report issued in September, TikTok notes that the platform relies on “automated moderation when our systems have a high degree of confidence that content is violative so that we can expeditiously remove violations of our policies. As a result, our overall protective detection efforts have improved.” According to that report, 113 million videos, most of a sexual nature, were removed from TikTok during the period of April to June in 2022. How much problematic content made it through the automated system is an open question, considering that 113 million is less than 1 percent of the total number of videos published during that period.

Despite the algorithm and the staggering amount of TikTok content overall, there are ways to counter the negative messaging people consume on the app. Users who are proactive about whom they follow and whom those creators associate with can nudge the algorithm toward more positive content. This means searching hashtags and keywords related to specific topics of interest, such as Passover recipes, following those creators and commenting on their videos, which boosts their visibility. When encountering problematic content, it’s possible to do what’s called a “long-press” on the video, holding your finger down until a “Not Interested” button appears. TikTok’s support pages say that taking this step will discourage the algorithm from showing similar content. Notably, if you simply double-tap a video, it gets marked as “Liked,” which turns the white heart symbol red and has the opposite effect. (Users who mistakenly like something can quickly reverse that action by tapping the heart to turn it white again.) In September the company also rolled out a “dislike” button for comments on videos. Dislikes aren’t visible to the creator or the public but provide TikTok with data to weed out extreme, hateful or otherwise offensive content.

Another way to combat the perpetuation of antisemitism on TikTok is to post content that uses the app’s enticing features and then leads young users to meaningful educational content elsewhere.

This is what social media researcher Tom Divon of the Hebrew University of Jerusalem’s Department of Communications and Journalism has done. When the notorious TikTok “Holocaust Challenge” was trending in 2020, leading teenage girls to pretend to be Holocaust victims, he decided to investigate. Before TikTok took the videos down, he and a colleague were able to save some 300 of them and interview some of the girls about what motivated them to pose as Holocaust victims. What they found was that many of those they spoke to had approached it from a position of curiosity and naïveté.

This led Divon to meet with the CEO of TikTok Germany and to design a series of classes for German schools and museums on how to increase views of their content to counteract ignorance of the Holocaust. Museums at Dachau and Bergen-Belsen concentration camps participated in the seminar, along with the Jewish Museum in Berlin and others. The key, he told educators and curators, is to communicate with Gen Z through the language of TikTok. He described this language in an interview with Haaretz in February: “To talk about the Holocaust, but in a way that’s a little more playful—not in a disparaging or disrespectful sense, but in the interactive sense. That means being colorful, rhythmic, filled with special features and movement in a way that ignites curiosity, that prompts viewers to ask questions, pushes them to visit the institution’s profile page and maybe—and this is the ultimate goal—to visit the institutions themselves.” The results have been encouraging: Growing numbers of views of the institutions’ content speak to the power of forging courageously onto TikTok.

For creator Schwam-Curtis, combating antisemitism and other forms of hate on TikTok was exhausting when they first started posting videos to the platform. “I wanted to focus my energy on creating and educating, but constantly found myself redirecting my time to addressing hateful comments. I think Toni Morrison put it best when she said, ‘The very serious function of racism [and I would say, antisemitism] is distraction. It keeps you from doing your work. It keeps you explaining over and over again your reason for being.’”

However, Schwam-Curtis says the hate has decreased, in part because they’ve cultivated a genuine audience that appreciates their content. “I’m algorithmically more likely to land on those people’s pages,” they explain. And in addition to building relationships with followers, Schwam-Curtis says that together they’ve curated community norms. “Our community has a strong moral orientation, and when people come into our space who don’t respect that, my followers educate them!”
