The Weight of Hate

Social media platforms need to do more to protect kids from rampant—and viral—toxic comments

The Internet can be a toxic hangout—especially for kids.

New research by Uttara Ananthakrishnan of the University of Washington Foster School of Business reveals that hateful commentary on social media content posted by minors is both rampant and viral.

Her analysis of 110 million responses to YouTube videos created by children and teenagers shows that 1 in 20 of those comments was insulting, obscene, threatening or hateful.

Moreover, hate quickly goes viral. When one toxic comment is posted, the likelihood increases that others will follow.

“In our study, we observed more than just the upfront cost of one offensive comment,” says Ananthakrishnan, an assistant professor of information systems at Foster. “When one person starts saying hateful things, they break societal norms of civility—and things can devolve very quickly.”

But this descent into toxicity is not inevitable… if social media platforms will commit to stopping it.

Love + Hate

Legions of kids are posting self-produced content to sites like TikTok, YouTube and Instagram. Through short videos of themselves singing, dancing, cooking, cracking jokes or creating memes, some want to entertain friends and others seek wider fame.

But posting on social media is fraught. While it can draw throngs of affirming “likes,” it also exposes its creator to public criticism—and far worse.


To measure the magnitude of online hate speech, Ananthakrishnan and co-author Catherine Tucker of the MIT Sloan School of Management analyzed more than 110 million comments on 50,000 videos posted by kids on YouTube, using natural language processing software to separate like from hate.
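The paper does not detail the specific software behind that classification, but the idea of automatically separating "like from hate" can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the marker word list, the scoring rule and the threshold are invented stand-ins for whatever model the researchers actually used.

```python
# Illustrative sketch only -- NOT the authors' pipeline. The marker list
# and threshold are made-up stand-ins for a real NLP toxicity model.
import re

TOXIC_MARKERS = {"idiot", "ugly", "stupid", "hate", "loser"}  # illustrative

def toxicity_score(comment: str) -> float:
    """Fraction of a comment's words that match the illustrative marker list."""
    words = re.findall(r"[a-z']+", comment.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TOXIC_MARKERS)
    return hits / len(words)

def is_toxic(comment: str, threshold: float = 0.1) -> bool:
    # A real system would use a trained language model, not a word list.
    return toxicity_score(comment) >= threshold

comments = ["Great video, keep it up!", "You are such a loser, stop posting"]
flags = [is_toxic(c) for c in comments]  # [False, True]
```

At the scale of 110 million comments, the point of any such classifier is the same: it lets researchers (or platforms) score comments far faster than human readers could.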

While a large majority of comments were positive, a significant minority contained obscene or toxic language or cruel personal attacks.

For kids of any age, even a single negative comment can drown out a chorus of praise. The trouble is that it’s usually not just one. The researchers found that the first hateful comment often sparked a torrent, frequently fueled by Internet “trolls,” who live up to their malevolent moniker by piling on odious responses.

“Trolls love this,” Ananthakrishnan says. “It’s like they see a small fire burning and decide to dump gasoline on it. Then it just blows up.”

What makes the situation even worse is that the biggest target of toxicity appears to be preteens, who tend to be at the most vulnerable stage of their emotional development.

Platforms: do more

Ananthakrishnan says that social media providers can protect free speech while also protecting against damaging speech.

All of the established social media platforms, she says, possess the AI-powered language and image processing technology to automatically scan, flag and conceal potentially hateful comments for human review—before they pollute the comment sections of kids simply living their lives online.
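The scan-flag-conceal workflow described above can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual system: the `ModerationQueue` class and the placeholder `looks_hateful` classifier are invented names standing in for a real toxicity model and review pipeline.

```python
# Hypothetical sketch of the scan -> flag -> conceal-for-human-review
# workflow described above. All names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Comment:
    text: str
    visible: bool = True  # concealed comments await a moderator's decision

@dataclass
class ModerationQueue:
    classifier: Callable[[str], bool]
    pending_review: List[Comment] = field(default_factory=list)

    def ingest(self, comment: Comment) -> Comment:
        # Automatically conceal anything flagged; a human makes the final call.
        if self.classifier(comment.text):
            comment.visible = False
            self.pending_review.append(comment)
        return comment

def looks_hateful(text: str) -> bool:
    # Placeholder classifier: flags one illustrative insult.
    return "loser" in text.lower()

queue = ModerationQueue(classifier=looks_hateful)
shown = queue.ingest(Comment("Nice job on the video!"))
hidden = queue.ingest(Comment("What a loser"))  # hidden until reviewed
```

The design choice matches Ananthakrishnan's point below: the comment is hidden immediately and the slower, more accurate human judgment happens afterward, so speed of enforcement is not sacrificed.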

They simply need to use it.

“Algorithms are getting really, really good at identifying hate speech,” says Ananthakrishnan. “Platforms could do so much more to make the Internet a safe place. But they rarely do anything until it becomes a really big deal.”

Her paper shows that online hate aimed at children is a really big deal.

“Given the potentially devastating consequences of hate speech on children and the speed at which enforcement is required, it is imperative that platforms employ detection of toxic comments,” she adds. “While human judgment is more accurate in assessing hate speech than algorithms, speed, rather than accuracy, is critical in this context.”

“The Drivers and Virality of Hate Speech Online” is the work of Uttara M. Ananthakrishnan and Catherine E. Tucker.


Ed Kromer is the managing editor of Foster Business magazine. Over the past two decades, he has served as the school’s senior storyteller, writing about a wide array of people, programs, insights and innovations that power the Foster School community.