SPLC Testifies Before Congress on Financing of Domestic Terrorism
Lecia Brooks, Southern Poverty Law Center, January 15, 2020
The SPLC’s Lecia Brooks testified today before the U.S. House Subcommittee on National Security, International Development and Monetary Policy (Committee on Financial Services) about how technology companies can disrupt the funding, organizing and recruiting efforts of hate groups on their platforms. Brooks delivered the following oral remarks to the subcommittee, in addition to written testimony.
My name is Lecia Brooks. I am a member of the senior leadership team at the Southern Poverty Law Center (SPLC). We are a civil rights organization founded in 1971 and based in Montgomery, Alabama, with offices in five Southern states and Washington, D.C. For more than three decades, the SPLC has been monitoring, issuing reports about, and training law enforcement officials on far-right extremist activity in the United States. Each year since 1990, we have conducted a census of hate groups operating across America, a list that is used extensively by journalists, law enforcement agencies and scholars, among others.
I would like to make three main points.
First, we are witnessing a surging white nationalist movement in the United States that is part of a larger, global movement linked by the idea that white people are being displaced, in part by migrants, in countries they believe should belong to them. This extremist movement represents a global terrorist threat and should be treated as such. Unfortunately, the words and actions of our president have energized and emboldened the white nationalist movement in the United States.
Second, this movement is rooted in a toxic, anti-democratic white supremacist ideology that is metastasizing on social media networks and other websites that traffic in hate. These networks are not only radicalizing people but are, in effect, incubating new terrorists – typically young white men who are motivated to act by what they call “white genocide.”
Third, we would like to recommend ways in which technology companies – including social media sites and online pay portals – can disrupt the funding, organizing and recruiting efforts of hate groups and bad actors who seek to normalize racism, antisemitism, and anti-immigrant ideologies as well as sexism and anti-LGBTQ animus.
The White Nationalist Movement Represents a Global Terrorism Threat
On August 3, 2019, the United States witnessed yet another mass shooting – this time in El Paso, Texas, where 22 people were killed and more than 20 were injured. Shortly before the shooting took place, a four-page manifesto appeared online, reportedly written by the shooter. The manifesto contained white nationalist talking points on “demographic displacement,” “white genocide” and “illegal immigration.” Much of its language mirrors President Trump’s rhetoric about a so-called “immigrant invasion” posing threats to American jobs and safety.
Technology companies, especially social media platforms, play an enormous role in the spread of hateful rhetoric and ideas, which can lead to the radicalization of people online.
Though the U.S. government has, since 9/11, devoted enormous resources to fighting international terrorism spawned by radical forms of Islam, it has done relatively little to combat another, increasingly virulent source of terror, one that has claimed many more lives in recent years: the white nationalist movement.
On March 15, 2019, a white nationalist massacred 51 Muslim worshipers at two mosques in Christchurch, New Zealand, and livestreamed one of the attacks on Facebook. On the killer’s weapon was written the white supremacist slogan known as the 14 words – “We must secure the existence of our people and a future for white children” – coined by the infamous neo-Nazi terrorist David Lane. In what has become commonplace for white nationalist terrorists, the Christchurch killer also left a manifesto bearing the unmistakable fingerprints of the so-called “alt-right,” both in tone and reference. It celebrated the Norwegian terrorist Anders Breivik as well as Charleston terrorist Dylann Roof. It spoke of “invaders” who “replace” white people – the same kind of language used by Roof and other white nationalist terrorists.
When asked after the Christchurch massacre if he believed white nationalists were a growing threat, the president said, “I don’t really. I think it’s a small group of people that have very, very serious problems. It’s certainly a terrible thing.”
The president is wrong to dismiss the significant threat of violence represented by this movement. In fact, as we have seen in recent months, one terrorist inspires another to act.
On April 27, five weeks after Christchurch, a gunman walked into the Chabad of Poway synagogue in California and opened fire. A 60-year-old woman observing Passover was killed. Many more might have been slaughtered if the gunman’s assault rifle had not jammed. The man accused of the murder, John Earnest, posted an “open letter” littered with the same racist and antisemitic tropes used by other white nationalist terrorists before him. He praised Brenton Tarrant, the man charged in Christchurch, writing that Tarrant “was a catalyst” for him. “He showed me that it could be done. It needed to be done.”
The Poway shooting occurred exactly six months after 11 Jews were massacred at the Tree of Life synagogue in Pittsburgh by a man who reportedly shouted “All Jews need to die” before he opened fire.
The “small group of people” that President Trump referenced has spawned the likes of Dylann Roof, killer of nine African-American worshipers in Charleston; Anders Breivik, killer of 77 people in Norway; Robert Bowers, the accused Pittsburgh shooter; Wade Michael Page, murderer of six Sikhs at a Wisconsin temple; and James Alex Fields, killer of anti-racist protester Heather Heyer in Charlottesville, Virginia. Many other white nationalists in recent years – far too many to list – have also committed hate-inspired violence or been arrested before they could launch terror attacks.
According to the SPLC’s analysis, more than 100 people in the United States and Canada have been killed in attacks committed by extremists linked to the white supremacist movement since 2014. All of the perpetrators interacted with extremist content online.
In our view, the most important factor driving this movement and its violence is the fear and resentment over the nation’s changing demographics. The U.S. Census has projected that sometime in the 2040s white people will no longer be a majority in the United States.
This nativist fear is not new. We began to see sharp increases in the number of U.S.-based hate groups around the turn of the century, following a decade in which the unauthorized immigrant population doubled, rising from 3.5 million to 7 million. In 1999, we counted 457 hate groups. That number more than doubled – to 1,018 – by 2011, two years into the Obama administration. But, after that peak, the number began to decline steadily, to a low of 784 by 2014.
Our latest count shows that hate groups operating across America rose to a record high in 2018. It was the fourth consecutive year of growth – a cumulative 30% increase that coincides roughly with Trump’s campaign and presidency – following three straight years of declines. We also found that white nationalist groups in 2018 rose by almost 50% – from 100 to 148 – over the previous year.
Racist and antisemitic violence has followed the same escalating pattern. FBI statistics show that overall hate crimes fell slightly in 2018, although those involving violence (as opposed to property) reached a 16-year high. This followed a 30% increase in hate crimes during the three-year period ending in 2017.
Since the campaign, Trump has continued to energize the white nationalist movement through both his words and his policies. For example, he famously insisted there were “very fine people” among the hundreds of neo-Nazis and other white supremacists who marched in the streets of Charlottesville, Virginia, in August 2017, shouting slogans like “Jews will not replace us.” In 2018, he called Haiti and majority-black countries in Africa “shithole countries.” He has also implemented draconian policies at the U.S.-Mexico border, separating migrant children from their families, imprisoning tens of thousands of immigrants, and virtually shutting down the asylum system.
In some cases, violent acts by extremists appear to have been motivated by Trump’s words or by support for him. In March 2019, Cesar Sayoc, a Trump supporter, pleaded guilty to charges related to a mail bomb campaign in which he sent 16 devices to Democratic politicians, media figures, and other prominent critics of the president in October 2018, just before the midterm elections. At the time, Trump was raging about the so-called caravan that was bringing an “invasion” of migrants to the United States. Sayoc’s targets included George Soros, a Jewish billionaire who funds progressive causes. Soros was the subject of a false alt-right conspiracy theory – spread on social media and even parroted by mainstream politicians – that claimed he was orchestrating and funding the caravan. The theory dovetailed with white nationalist notions that Jews, more generally, are working to facilitate immigration.
Similarly, a study released in March 2018 found that President Trump’s tweets on Islam-related subjects were highly correlated with anti-Muslim hate crimes and that the rise in anti-Muslim hate crime since Trump’s campaign was concentrated in counties with high Twitter usage.
White Supremacist Terrorists Are Being Incubated on Both Extremist and Mainstream Social Media Sites
The President has undoubtedly energized the white nationalist movement. But nothing has helped facilitate the process of far-right radicalization like the internet. Long before Donald Trump entered office, white supremacists around the world began constructing a robust, online ecosystem that indoctrinates people – especially young white men – into the world of hate. The dramatic rise in white nationalist hate groups and white supremacist killers in recent years is a testament to its effectiveness. Indeed, in the manifesto he posted online prior to murdering 51 Muslim worshipers in Christchurch, the killer posed a question to himself: “From where did you receive/research/develop your beliefs?” He answered: “The internet, of course. You will not find the truth anywhere else.”
The Christchurch killer’s online radicalization narrative is now a terrifyingly common one. Before the days of the internet, far-right extremists typically had to publish and disseminate propaganda in printed form. Most Americans were simply never exposed to this material. Now, white nationalists commonly develop their views by coming into contact with extremist content online – either on social media or other sites that are fine-tuned to encourage young men to blame their real and perceived grievances on racial and religious minorities, immigrants, women, and others.
We’ve seen numerous examples of men who were radicalized online and went on to commit acts of terrorism.
Dylann Roof became convinced that black people pose a tremendous threat of violence to white people after he typed “black on white crime” into Google’s search engine and found himself on the website of the Council of Conservative Citizens, a white nationalist hate group that has called black people “a retrograde species of humanity.” Robert Bowers’ antisemitic beliefs were reinforced on Gab, a social media site crawling with references to “white genocide” and posts encouraging others to commit acts of violence against Jews. In his manifesto, John Earnest referred to his fellow users on the white supremacist-friendly forums 4chan and 8chan as his “brothers” before encouraging them to commit attacks of their own.
White supremacists hoping to disseminate their propaganda have been helped immeasurably by social media companies that are, in some cases, unwilling to moderate hateful or extremist content. Twitter, for example, allows some of the most prominent leaders of the white nationalist movement – including David Duke and Richard Spencer – to maintain accounts. YouTube is one of the most efficient radicalizing forces on the internet, one that white nationalists frequently credit with first introducing them to ethnonationalist ideas.
When tech companies do decide to act against hate, it is often only after a violent attack has occurred. They need to proactively address the problem of extremist content on their platforms rather than simply react after people have been killed.
Most people who associate with the white nationalist movement do not belong to a formal hate group but act as part of loosely organized communities of extremists who congregate around online propaganda hubs. The neo-Nazi website Daily Stormer, for instance, has cultivated a massive following of readers who daily consume content that tells them that the Holocaust was a hoax, that Jews are committing a genocide against white people, and that there is an impending race war in the United States. The site often presents this content under layers of humor that are designed to desensitize readers to grossly racist content and ease them into the world of hate. This is part of its strategy to recruit impressionable young people. Andrew Anglin, who runs the Daily Stormer, has said that his site is “mainly designed to target children.”
Social media and sites like the Daily Stormer have helped to cultivate an enormous online white nationalist movement – one that is now actively embracing violence as a solution to “white genocide.” Though many extremists see Trump as a fellow traveler – or even as a champion of their movement – they are frustrated with the pace of political change and, therefore, increasingly believe that they can bring about their ethnonationalist vision only through acts of violence.
Violent attacks by far-right extremists are growing in frequency and becoming more deadly. In a 2019 report, the Anti-Defamation League found that domestic extremists killed 50 people in 2018 – up from 37 in 2017 – and that “every single extremist killing – from Pittsburgh to Parkland – had a link to right-wing extremism.” Violence in the name of white supremacy encourages others to carry out similar attacks. An analysis by The New York Times found that “at least a third of white extremist killers since 2011 were inspired by others who perpetrated similar attacks, professed a reverence for them or showed an interest in their tactics.”
There are entire online spaces – including the forum Fascist Forge, threads on the social media sites Gab and Telegram, and many others – that exist solely to provide training and advice about how to carry out acts of violence; to disseminate polemical texts that promote racial terrorism; to encourage followers to commit their own violent attacks; and to venerate those who have carried out acts of domestic terrorism in the name of white supremacy. These online spaces are incubating future terrorists.
Many adherents to white nationalist ideology look upon white supremacist mass killers with a degree of religious reverence; it is not difficult to find images on social media of men like Roof, Bowers, and Earnest depicted as saints. Until the SPLC brought it to the attention of the website Teespring in 2019, T-shirts and mugs with the images of six white supremacist killers under the words “Praise the Saints” were available for purchase on the site. Men who commit acts of terrorism in the name of white supremacy are, in effect, promised they will be canonized within the movement.
These websites are not only radicalizing potential terrorists; they are also injecting toxic white supremacist ideology and other extremist ideas into the mainstream. A Twitter employee who works on machine learning told Vice last year that Twitter has not taken an aggressive approach to removing white supremacist content from its platform because any algorithm it would use to identify objectionable content would also flag the accounts of some Republican politicians. “Banning politicians wouldn’t be accepted by society as a trade-off for flagging all of the white supremacist propaganda,” he argued. The president himself has retweeted content that originated in white nationalist networks, such as in August 2018 when he tweeted about the “large-scale killing” of white farmers in South Africa. He also has praised far-right internet conspiracy theorist Alex Jones, calling his reputation “amazing.”
Technology Companies Must Act to Disrupt the Funding of Hate Online
For decades, the SPLC has been fighting hate and exposing how hate groups use the internet. We have lobbied internet companies, one by one, to comply with their own rules to prohibit their services from being used to foster hate or discrimination. A key part of this strategy has been to target these organizations’ funding.
Hate group sites are funded by peer-to-peer interaction, not by large donors. Even a small amount of money can go a long way in spreading hate online. These groups and individuals are able to spread their toxic ideologies far and wide through ads and events that cost relatively little.
The first targets of our attack against hate group funding online were PayPal, Apple’s iTunes and Amazon. The SPLC found that at least 69 hate groups were using PayPal, the world’s largest online payment processor, to collect money from merchandise sales and donations. PayPal was earning a fee from each transaction, and essentially served as the banking system for white nationalism.
At iTunes, the SPLC identified at least 54 white-power bands that were earning 70 cents for each downloaded song. Amazon, too, was selling racist music, and groups were earning commissions by sending their users to Amazon to buy products. Within days of an SPLC exposé in November 2014, Apple vowed to purge racist music and immediately began removing dozens of offensive bands from iTunes.
We continued our campaign over the months and years that followed, publishing reports and sharing information with the news media about the many ways Silicon Valley was enabling the spread of hate.
The former Klansman David Duke and others like him had their own channels on YouTube. Numerous hate groups had Facebook pages. Google was placing ads on hate group websites, funneling money to them from mainstream advertisers. The hugely popular website Reddit, too, was hosting racist content categories, or subreddits, with names that included racial slurs. Twitter was awash in racist comments. And racist websites like Stormfront and the Daily Stormer were hosted and serviced by a variety of reputable companies.
The public exposure was half the battle. We conducted the other part of the campaign privately. SPLC officials held dozens of meetings with top Silicon Valley executives. Some companies acted. Some took half steps. Others did little or nothing. But eventually, the far-right extremists who depended on Silicon Valley were beginning to feel the pain. “[S]lowly, methodically, the SPLC and other such groups are moving to cut off the miniscule financial support that sustains what little counter-culture is left,” complained the white nationalist Radix Journal in May 2015.
Our campaign really began to see results in June 2015, when Dylann Roof massacred nine African Americans at the Emanuel AME Church in Charleston. As a shocked nation mourned, the SPLC alerted PayPal that a key point in Roof’s radicalization came when he found racist propaganda on the website of the hate group Council of Conservative Citizens. Days later, PayPal canceled the group’s service.
Then, Google began aggressively pulling ads from hate group websites. Reddit dumped some of its most offensive subreddits. Other companies began to act with greater urgency. The SPLC kept up the pressure, cajoling companies and exposing those that dragged their feet.
Then, two years after the Charleston massacre, the dam burst. In August 2017, hundreds of white supremacists gathered under the umbrella of the “alt-right” in Charlottesville, Virginia, to protest the planned removal of Confederate statues. Violence broke out, and young anti-racist demonstrator Heather Heyer was murdered in the melee. Two law enforcement officers also were killed in a helicopter crash.
The SPLC revealed that organizers, speakers and attendees of the rally relied heavily on PayPal to raise money and move funds around during the run-up to the event. Responding immediately, the company dropped many of the accounts named by the SPLC, including that of key white nationalist Richard Spencer, who organized the rally. “As much as I hate to say it, these attacks have been extremely detrimental to my ability to move forward,” Spencer told HuffPost.
Within days of the rally, the websites Stormfront and Daily Stormer vanished as their providers pulled their services. Other companies acted as well, and several reached out to the SPLC to identify hate groups among their clients. In the months that followed, numerous extremists lost access to social media platforms like Twitter, YouTube and Facebook.
Extremists and their allies, again, blamed the SPLC. “[T]he radical Southern Poverty Law Center (SPLC) has slimed its way through the doors of the biggest tech companies out there, offering its services as the leading censor of conservative voices,” the hate group American Free Press wrote last June. In August 2018, anti-Muslim hate leader David Horowitz said, “The reason Mastercard and Visa gave us for cutting us off and thus sabotaging our online fund-raising operation is that the SPLC told them that we were a hate group.”
On Oct. 25, 2018, the Change the Terms coalition – including the SPLC and other civil rights groups – released a suite of recommended policies for technology companies that would take away the online microphone that hate groups use to recruit members, raise funds and organize violence.
In response to Change the Terms’ advocacy, several Silicon Valley leaders have made promising changes that align with the coalition’s vision for a safer online world. In March 2019, Facebook banned prominent white supremacists, published a report on content removal and made changes to its livestreaming feature while also accepting the coalition’s recommendations on tracking URLs from extremist sites.
In August 2019, internet-infrastructure firm Cloudflare cut its services to 8chan, an infamous online forum. The move came nearly two days after the mass shooting in El Paso, Texas, in which the alleged gunman posted an anti-Latinx manifesto on 8chan 20 minutes before murdering 22 people.
In June 2019, YouTube announced a broadened hate-speech policy, in which “content that alleges a group is superior in order to justify discrimination on characteristics like age, race, caste, gender, religion, sexual orientation, or veteran status” would be prohibited.
These shifts have made the internet safer for millions of people, but the work is far from finished.
In November 2019, Facebook announced that it was taking down substantially more posts containing hate speech from its platform than ever before, claiming that it removed more than 7 million instances of hate speech in the third quarter of 2019, an increase of 59% over the previous quarter. Facebook said that a growing share of that hate speech – 80% – is now detected automatically by artificial intelligence rather than by human reviewers.
Hate groups have clearly been damaged by the efforts of the SPLC and its allied organizations, including the Change the Terms coalition, to fight them and their funding sources online. But the fight is far from over. Many extremists are finding new, though often obscure, internet platforms along with technology providers that don’t mind providing them with services.
Charities Must Also Be Vigilant in Fighting Hate Online
Charities also have a role to play in fighting hate online by blocking donations to hate groups. Charitable gift funds – including the largest charity in the United States – are helping dozens of hate groups raise millions of dollars by allowing their donors not to reveal their identities.
Donors Trust, Fidelity Charitable Gift Fund, Schwab Charitable Fund, and Vanguard Charitable are donor-advised funds that allow individual donors to have accounts from which they can contribute to the nonprofits of their choice. From mid-2014 through 2017, these four donor-advised funds combined to funnel nearly $11 million to 34 organizations that we have identified as hate groups, according to a Sludge analysis of recent tax filings. Among these groups are 12 anti-LGBTQ groups, 12 anti-Muslim groups, eight anti-immigrant groups, one white nationalist group and one “radical traditionalist Catholic” group. The white nationalist organization VDARE Foundation received $46,000.
These donor-advised fund providers are serving as financial pass-throughs to hate groups.
The Federal Government Has Long Failed to Devote the Resources Needed to Combat the Threat of the White Nationalist Movement
Following the violence at the white supremacist “Unite the Right” rally on the weekend of August 11-12, 2017, in Charlottesville, Virginia – which left an anti-racist counter-demonstrator dead and more than 30 people injured – Congress unanimously passed a joint resolution urging the Trump administration to “use all available resources” to address the threat from groups that espouse white supremacy. The resolution further called on the attorney general and other federal agencies to vigorously prosecute criminal acts by white supremacists and to improve the collection and reporting of hate crimes.
Clearly, little or nothing has been accomplished to improve the collection and reporting of hate crimes. (The Justice Department acknowledges that hate crimes are vastly underreported. Its Bureau of Justice Statistics estimates that there are as many as 250,000 hate crimes in our country each year. Yet, in its 2018 report, the FBI counted just 7,120 hate crime incidents.)
In terms of addressing white supremacist terror, we know very little about what this administration is doing or whether it is taking any steps whatsoever to counter the threat.
What we do know is that the Department of Homeland Security (DHS) has disbanded a group of intelligence analysts who focused on the threat of domestic terrorism. As part of the department’s Office of Intelligence and Analysis (I&A), these analysts shared information about possible domestic terror threats with state and local officials to help protect communities. One DHS official told the Daily Beast for an April 2, 2019, report: “We’ve noticed I&A has significantly reduced their production on homegrown violent extremism and domestic terrorism while those remain among the most serious terrorism threats to the homeland.”
There are other causes for concern. In 2017, six months into the president’s term, the FBI’s Domestic Terrorism Analysis Unit, part of the bureau’s Counterterrorism Division, warned of the rise of a “black identity extremist” movement. The report was issued to law enforcement agencies across the country just a week before the white supremacist rally in Charlottesville. The reality is that no such movement exists. Federal law enforcement agencies also have shown a pattern of viewing anti-fascist protesters as just as problematic as the deadly white supremacist movement.
We do not want to leave the impression that federal law enforcement agencies across the board have not taken domestic terrorism seriously. To be clear, the FBI and its joint terrorism task forces have thwarted numerous white supremacist terror plots in recent years. In February 2019, for example, the FBI arrested U.S. Coast Guard Lt. Christopher Hasson, a self-avowed white nationalist who worked at the Coast Guard’s headquarters in Washington, D.C., on charges related to what authorities said was a terrorist plot to attack politicians and journalists. In Kansas, three men who called themselves “the Crusaders” were convicted in April 2018 for plotting to blow up an apartment complex where Somali refugees lived. There are many other examples. (Indeed, the numerous examples of these plots reinforce the danger of this movement.)
But, there has not been the kind of sustained, coordinated focus at the highest levels to fight this growing threat. Since 9/11, our country has spent hundreds of billions of dollars to fight groups like Al Qaeda and ISIS. Comparatively, very little has been spent on domestic terrorism.
Congress Must Act to Combat White Nationalism and Racist Violence
To battle this metastasizing threat, we must act.
The SPLC encourages corporations to create policies and terms of service to ensure that social media platforms, payment service providers, and other internet-based services do not provide platforms where hateful activities and extremism can grow and lead to domestic terrorism.
Removing hate groups from online platforms and cutting off their funding sources will prevent their ideas from reaching a wider audience and disrupt their networks.
Some technology companies have taken steps in the right direction, but internet companies must do far more to combat extremism and hate. To stem the rise of hate and domestic terrorism, we are encouraging tech companies to respect people over profits.
Thank you.