Has the Internet Extended Hate’s Reach into the US Public?

James E. Hawdon | (The Conversation)

Extremism has always been with us, but the internet has allowed ideas that advocate hate and violence to reach more and more people. Whether it’s the deadly “Unite the Right” rally in Charlottesville or the 2015 Charleston church massacre, it’s important to understand the internet and social media’s role in spreading extremism – and what can possibly be done to prevent these views from leading to actual violence.

For six years, I’ve been director of the Center for Peace Studies and Violence Prevention at Virginia Tech, which researches the causes and consequences of violence in society. While I’ve been studying extremist ideologies for over a decade, I’ve focused on their online forms since 2013. From our research, we’ve been able to track the growth of these views on the internet – how they’re spread, who’s being exposed to them and how they’re reinforced.

The internet’s fertile landscape

The First Amendment allows us to express any ideas, no matter how extreme. So how should we define extremism? In part, it’s similar to Supreme Court Justice Potter Stewart’s famous line about pornography – “I know it when I see it.”

Extremism is generally used to describe ideologies that support terrorism, racism, xenophobia, left- or right-wing political radicalism and religious intolerance. In a way, it’s a political term describing beliefs that don’t reflect dominant social norms and that reject – either formally or informally – tolerance and the existing social order.

Extremist groups went online almost immediately after the internet was developed and their numbers increased dramatically after 2000, reaching over 1,000 by 2010. But the data on organized groups don’t include the sheer number of individuals who maintain websites or make extremist comments on social media platforms.

As the number of sites spewing hate has grown, so has the number of people receiving their messages, with younger people particularly vulnerable. The percentage of people between the ages of 15 and 21 who saw online extremist messages increased from 58.3 percent in 2013 to 70.2 percent in 2016. While extremism comes in many forms, the growth of racist propaganda has been especially pronounced since 2008: Nearly two-thirds of those who saw extremist messages online said they involved attacking or demeaning a racial minority.

Bubbles of hate

In recent years, the proliferation of social media – which gives users the ability to reach millions instantaneously – has made it easier to spread extreme views.

But it is in more subtle ways that our online experiences may amplify extremism. It’s now common practice for social networking sites to collect the personal information of users, with search engines and news sites using algorithms to learn about our interests, wants, desires and needs – all of which influences what we see on our screens. This process can create filter bubbles that reinforce our preexisting beliefs, while information that challenges our assumptions or points to alternative perspectives rarely appears.
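The feedback loop described above can be illustrated with a toy sketch – this is not any platform’s actual ranking system, just a minimal illustration of how scoring content by similarity to a user’s past clicks pushes familiar viewpoints up and unfamiliar ones down:

```python
from collections import Counter

def rank_feed(candidate_posts, click_history):
    """Toy personalization: score each candidate post by how often the
    user has clicked posts with the same topic tag, then sort by that
    score. Content matching past clicks floats to the top; challenging
    or unfamiliar viewpoints sink, narrowing exposure over time."""
    clicks = Counter(click_history)  # missing topics count as 0
    return sorted(candidate_posts,
                  key=lambda post: clicks[post["topic"]],
                  reverse=True)

# A user whose click history is dominated by one topic...
history = ["politics-A", "politics-A", "politics-A", "sports"]
feed = rank_feed(
    [{"id": 1, "topic": "politics-B"},
     {"id": 2, "topic": "politics-A"},
     {"id": 3, "topic": "sports"}],
    history,
)
# ...is shown same-topic content first (ids 2, 3, 1); the opposing
# "politics-B" post lands last, so it is the least likely to be seen,
# clicked, and therefore boosted in the next round.
```

Each click feeds back into the history, so the next ranking skews even further toward what the user already engages with – the self-reinforcing "bubble" the article describes.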

Every time someone opens a hate group’s website, reads its blogs, adds its members as Facebook friends or views its videos, the individual becomes enmeshed in a network of like-minded people espousing an extreme ideology. In the end, this process can harden worldviews that people become comfortable spreading.

Unfortunately, this seems to be happening. When we began our research in 2013, only 7 percent of respondents admitted to producing online material that others would likely interpret as hateful or extreme. Now, nearly 16 percent of respondents report producing such materials.

While most people who express extremist ideas do not call for violence, many do. In 2015, about 20 percent of the messages people saw online openly called for violence against the targeted group; this number nearly doubled by 2016. Granted, not everyone who sees these messages will be affected by them. But given that the radicalization process often begins with simply being exposed to extremism, government authorities in the U.S. and around the world have been understandably concerned.

The role of social control

While all of this seems bleak, there is hope.

First, companies such as GoDaddy, Facebook and Reddit are banning accounts associated with hate groups. Perhaps more importantly – as we saw during and after Charlottesville – people are defending diversity and tolerance. Over two-thirds of our respondents report that when they see someone advocating hate online, they tell the person to stop or defend the attacked group. Similarly, people are using social media to expose the identities of extremists, which is what happened to some of those involved in the Charlottesville rally.

Perhaps these acts of online and offline social control can convince extremists that, somewhat ironically, a tolerant society doesn’t tolerate extremist ideologies. This may create a more tolerant virtual world, and, with luck, disrupt the radicalization of the next perpetrator of hate-based violence.

James E. Hawdon, Director, Center for Peace Studies and Violence Prevention, Virginia Tech

This article was originally published on The Conversation. Read the original article.

—–

Related video added by Juan Cole:

Al Jazeera English: “The Stream – Hate speech v free speech: Where is the line? Part 1”

7 Responses

  1. How about social media, search engines, and such – having learned that a user looked at extremist materials – then using an algorithm to create filter bubbles that reinforce tolerance and anti-extremism?

  2. This essay is the worst kind of statism I’ve seen in a long time. Now that the internet is not advancing Washington’s agenda, freedom of speech has become a bad thing? All of Hawdon’s argument taken together reminds me of Soviet Writers’ Union chairman Alexei Alexandrovich Surkov’s arguments in favour of socialist realism in Stalin’s Russia. Soviet censorship of ideas precluded writing that undermined the government. Has America fallen so far that a liberal publication calls for the repeal of the First Amendment?

    • Alec – the spin and implications you read into Professor Cole’s thoughtful piece were unnecessary. No constitutional right was endangered in his text.

      Companies that operate social media can conduct themselves any way they see fit.

    • How would you, if you’d been living in Weimar Germany, have stopped the Nazis? You seem unperturbed that the Trumpite minority is intimidating the rest of us into silence in the manner of the rising KKK tearing down Reconstruction to impose 90 years of darkness (but hey, they got rid of a Federal dictatorship!); you certainly have nothing to break that silence.

  3. Well, the last time, fascism was spread by radio. Whenever a new form of communication sweeps humankind, the pioneers enter a Wild West with no rules or norms to impede the spread of their messages. The corporate establishment bumbles behind to impose its version of order, but once imposed it’s there as long as the medium exists. So sure, Facebook can now ban the far right, but it can ban progressives too.

    Tolerance isn’t a useful enough concept. Because then you’re having to punish the intolerant, and they accuse you of being a hypocrite. The norm we need, which was largely buried long before the Internet by the rise of Reagan, is equality. Too many people do not understand that equality is the foundational difference between the Left and the Right. The Right did very well for itself by pretending to be offended by redistributive social practices when in fact these have always been a part of human societies (which fact is not taught in schools). Trump’s rise reveals that the lying bastards never had a problem with governmental power as Reagan claimed, they had a problem with anyone or anything at all trying to reduce inequality in all its malignant forms.

  4. Yes . . .

    I sell stuff on eBay because it gives me access to far more potential buyers than selling on Craigslist or via my local newspaper, both of which cover only a small geographic area. Buyers are not concentrated in a small geographic area, but sparsely distributed all over the USA and globe.

    The number of racist, hateful people has probably not increased by a huge number, BUT before the Internet, they were isolated and only able to connect with a small number of like-minded people within a small geographic area.

    BUT. . .

    Just as I was able to build a global community to discuss esoteric technical issues via ARPAnet then later the public internet, people scattered all over the globe can now easily connect about thousands of unique shared interests.

    In this respect, the hateful are no different in that they now have a “safe” space to congregate and share their hate. Given that these types of people contribute NOTHING to the advancement and stability of humans, the best thing society could do is return them to geographic isolation.

    These types of corrosive people are of no benefit to society and their speech should be suppressed. Note that much of the globe does just that.
