Facebook Allowed Several Neo-Nazi Pages To Remain Up Because They “do not violate community standards”

Facebook allowed various Neo-Nazi groups to remain on its platform because they “do not violate community standards”, according to recent reporting from The Independent.

The Counter Extremism Project, a nonprofit combatting extremist groups, reported 35 pages to Facebook, according to The Independent. Although the company said it’d remove six of them, the other requests were met with this response:

“We looked over the page you reported, and though it doesn’t go against one of our specific community standards, we understand that the page or something shared on it may still be offensive to you and others.”

– The Independent

The groups reported included international white supremacist organizations, with many making racist or homophobic statements. Some groups also had images of Adolf Hitler and other fascist symbols.

Although this is particularly troublesome following the Christchurch shooting — which was broadcast on Facebook Live — this has been a long-standing issue for Facebook. The platform is notorious for allowing hate speech to flourish while poorly applying its own community standards.

At first glance, Facebook’s definition of “hate speech” seems fine. Under its guidelines, Facebook bans hate speech that “directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, or gender identity, or, serious disabilities or diseases.”

However, Facebook ignores power imbalances in determining what’s hate speech.

On Facebook, for example, users can be banned for saying “men are trash”. The power imbalance between men and women in a patriarchal society tells you that, even if someone’s feelings might be hurt, saying “men are trash” doesn’t harm men on a societal level.

But, while it was banning users for “men are trash” commentary, Facebook took far longer to ban Alex Jones, the host of InfoWars who incited harassment and spread misinformation, as reported by Mic.

Along with Facebook ignoring power imbalances, it has also — intentionally or unintentionally — found a way to monetize hate.

Earlier this year, The Los Angeles Times reported that Facebook actually allowed advertisers to target users based on their interest in Nazis. Advertisers were able to home in on topics like “Josef Mengele” and “Heinrich Himmler”.

By allowing advertisers to target people based on their interest in Nazism, Facebook essentially allowed a violent ideology — one that has led to actual genocide — to become a method for profit. Doing so undercuts any incentive to take proactive action against the kind of violent speech that has real consequences for oppressed people.

Facebook is under a lot of pressure now, especially from New Zealand’s Prime Minister Jacinda Ardern, who has remained unimpressed by the company’s responses to the broadcast of the Christchurch shooting on its platform.

By allowing Neo-Nazis and other hate groups to remain on its site, and even letting them pay for advertising, Facebook laid the groundwork for things like the broadcast of the Christchurch shooting to happen on its platform.

Nothing occurs in a vacuum. Increasing Islamophobic rhetoric from all major political parties made Muslims an easy target. But it’s online platforms like Facebook declaring that Neo-Nazis don’t violate community standards that helps embolden them.

The ACLU is Suing the FBI for Information On Its ‘Black Identity Extremists’ Report

The ACLU and the Center for Media Justice are suing the FBI for records related to a controversial 2017 report that cited a rise in Black extremism following police-involved shooting deaths of African Americans.

The report, titled “Black Identity Extremists Likely Motivated To Target Law Enforcement Officers,” claims that law enforcement officials were being targeted as protests against police violence erupted around the country.

The ACLU submitted a public records request asking for all documentation since 2014 that used the phrases “black nationalist,” “black identity extremist,” and “black separatist,” according to the Associated Press.

The lawsuit comes because the FBI has withheld these documents and, in some cases, refused to search entire categories of records, according to the ACLU.

“The FBI’s baseless claims about the fictitious group of ‘Black Identity Extremists’ throws open the door to racial profiling of Black people and Black-led organizations who are using their voices to demand racial justice,” Nusrat Choudhury, deputy director of the ACLU’s Racial Justice Program, said in a statement.

Surveillance of Black people is nothing new; it is foundational to the United States. As times change, though, surveillance has shifted from its early roots, like the slave pass, to tactics like the ones the ACLU is fighting back against.

Online monitoring seems to play a key role in the identification of supposed ‘Black Identity Extremists.’ Within the document, the FBI noted that reporting was primarily derived from things like police investigations and “subjects’ posting on social media.”

“As a Black activist and member of the Black Lives Matter Network, I am concerned that the FBI is deploying high-tech tools to profile, police, and punish Black people who stand up for racial justice,” said Malkia A. Cyril, co-founder and executive director of the Center for Media Justice, noting the program’s similarities to COINTELPRO.

So far, one arrest has been made due to the surveillance efforts of the ‘Black Identity Extremist’ program. In December 2017, Rakem Balogun was arrested due to Facebook posts, as reported by The Guardian.

As technology changes, so do the ways in which surveillance is carried out. Social media has become a playground for surveillance, especially when it comes to targeting movements. Although it’s unclear just how much the FBI’s identification of ‘Black Identity Extremists’ relies on online surveillance, this kind of monitoring has been a problem for years.

For example, ACLU documents showed the Boston Police Department used a program called Geofeedia to conduct online surveillance between 2014 and 2016 (notably, the same program used by the Chicago Police Department in the same time span). The BPD monitored hashtags like #BlackLivesMatter, #MuslimLivesMatter, and the use of basic Arabic words.

This obvious inclusion of anti-Black Islamophobia in these programs used by local police departments is important to note because the FBI’s made-up designation also puts Black Muslims at increased risk. The language used in the FBI’s report is reminiscent of the entrapment- and surveillance-based program Countering Violent Extremism, notorious for targeting Black Muslims.

The full extent of social media monitoring and how deeply the FBI is looking into the groups mentioned in its report are unknown, but it’s clear that such monitoring exists. Obtaining more information will allow the public to fully understand just how the culture of surveillance has adapted to a digital era. It will also give people insight into how the FBI is monitoring certain groups of people and how it chooses whom to keep an eye on.

Whatever the documents show, it’s important to remember that this type of surveillance is not exceptional. Black people have been watched and tracked since they were forcibly displaced. The surveillance of Black people is the United States’ norm.

Facebook Confirms It Stored Millions of Passwords In Plain Text. You Should Change Your Password Right Now

Facebook isn’t exactly well known for having the best security practices and now, another blunder has surfaced.

On Thursday, a report from Krebs on Security revealed the company had stored hundreds of millions of users’ passwords for Facebook, Facebook Lite, and Instagram in plain text, making user passwords accessible to thousands of Facebook employees.

Usually, passwords are protected before they’re stored through a one-way scrambling process called hashing, but Facebook had a “series of security failures” that left people’s password information wide open, according to the Krebs report.
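
For context, here is a minimal sketch of what proper password storage looks like, assuming a generic login flow rather than anything Facebook actually runs: the password is combined with a random salt, run through a deliberately slow one-way hash, and only the salt and digest are kept.

```python
# A minimal, hypothetical sketch of salted password hashing (not Facebook's code).
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow so brute-forcing stolen hashes is expensive

def hash_password(password: str):
    """Return (salt, digest); the plain-text password itself is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Storing only the salt and digest is what prevents employees, or anyone else who can read the database, from recovering the original passwords.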

Pedro Canahuati, Facebook’s VP of Engineering, Security, and Privacy, confirmed the issue in a blog post titled “Keeping Passwords Secure”, writing, “As part of a routine security review in January, we found that some user passwords were being stored in a readable format within our internal data storage systems.”

This may not seem like a big deal, because the company said there’s no sign that the passwords were visible to anyone outside of Facebook. However, this just proves, again, that Facebook’s security is subpar.

For example, in November of 2018, private messages from 81,000 users were put up for sale, as reported by The Verge. And Facebook is still facing repercussions for the Cambridge Analytica scandal that resulted in Mark Zuckerberg testifying before Congress.

Facebook is notifying all the people who were affected. According to Krebs, that’s between 200 and 600 million users. The company isn’t forcing anyone to change their passwords, but you should, even if you aren’t notified.

There’s no sign of abuse, but it’s better to protect your account than to take any chances.

Facebook is Giving More Details on How It Handled Video of the Christchurch Shooting

On Wednesday, in response to continued pressure from multiple countries, Facebook published a blog by Guy Rosen, Facebook’s VP of product management, providing further details on the company’s response to the Christchurch shooting in New Zealand that left 50 people dead.

Companies like Facebook often use artificial intelligence to identify content that should be removed. However, an over-reliance on AI can lead to exactly what happened with the Christchurch video. The shooting was allowed to be broadcast on Facebook Live, and afterward, copies of the video continued to spread across the internet.

Although Facebook continues to point out that the video was only viewed about 200 times during its live broadcast — and that nobody flagged it to moderators — those excuses aren’t cutting it for users and lawmakers.

Now, the company says it tried to use an experimental audio technology in order to catch copies of the video that its AI missed. Facebook wrote that it “employed audio matching technology to detect videos which had visually changed beyond our systems’ ability to recognize automatically but which had the same soundtrack.”

In the post, Facebook noted that AI requires “training data”: essentially, it learns to ban a specific type of content by seeing it a certain number of times. That means the AI can also be confused if the video is slightly altered visually, such as being doctored or captured by someone recording their own screen and then posting that.
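
To see why small alterations matter, consider a toy illustration (an assumed example, not Facebook’s actual system): an exact fingerprint like a cryptographic hash changes completely when even one byte of a re-encoded clip differs, which is why platforms reach for fuzzier signals such as audio matching.

```python
# Toy illustration: exact hashes can't catch a clip once it has been slightly altered.
import hashlib

original_clip = b"\x10\x20\x30\x40" * 1024       # stand-in for the bytes of a known-bad video
altered_clip = original_clip[:-1] + b"\x41"      # one byte changed, e.g. by re-recording a screen

print(hashlib.sha256(original_clip).hexdigest())  # fingerprint of the original upload
print(hashlib.sha256(altered_clip).hexdigest())   # completely different fingerprint, so no match
```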

Facebook didn’t provide too many details on the audio technology beyond that. But, it’s clear that the company is trying to be more transparent due to increased government pressure to explain its process.

New Zealand’s Prime Minister Jacinda Ardern has made it clear that she’s been unimpressed with Facebook so far. She has been in contact with Facebook’s chief operating officer, but increased her criticism of the platform on Tuesday in Parliament.

Facebook is also facing potential consequences here in the United States.

Yesterday, Rep. Bennie G. Thompson, chairman of the House Homeland Security Committee, wrote a letter to executives from top companies, including Facebook and YouTube, where the video continued to circulate.

In it, Thompson said they must “do better” and called for executives, including Facebook’s CEO Mark Zuckerberg, to appear before Congress in order to explain their process and ensure something like this won’t happen again.

What is perhaps most frustrating about the entire conversation so far is that it continues to center on reactivity. Facebook says it couldn’t stop the original video because nobody reported it, but the type of online hate that the shooter identified as inspiration has existed on the platform for years.

Social media companies have historically taken a relaxed approach to confronting online hate. But Christchurch, and the lack of appropriate response to it, isn’t an event that can easily be swept under a rug.

Facebook Settles Historic Civil Rights Lawsuit Alleging Discriminatory Ad Practices

Facebook has settled a civil rights lawsuit that alleged the company participated in discriminatory ad practices.

Facebook’s ad targeting system allows companies to exclude certain groups from seeing ads for things like housing and jobs. The settlement will require the company to essentially overhaul its ad system, which is its major money maker. It’s a first-of-its-kind settlement and one that could have huge implications for the company moving forward.

The company will create a separate space on Facebook, Instagram, and Messenger for advertisers who are making job, housing, or credit ads. Within that space, Facebook will remove targeting options based on age, gender, and categories associated with “protected characteristics or groups,” according to the ACLU.

Advertisers won’t be able to target these ads to a ZIP code or to a geographic area smaller than a 15-mile radius. And, when developing “Lookalike” audiences for these advertisers, Facebook will stop using age, gender, ZIP code, or membership in specific groups.
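
As a rough, hypothetical illustration of the kind of rule the settlement describes (the field and function names below are invented for illustration, not Facebook’s actual ad API), a compliance check for housing, job, or credit ads might look something like this:

```python
# Hypothetical compliance check for "special category" ads (illustrative only).
from dataclasses import dataclass
from typing import Optional, List

SPECIAL_CATEGORIES = {"housing", "employment", "credit"}
MIN_RADIUS_MILES = 15.0

@dataclass
class TargetingSpec:
    category: str
    min_age: Optional[int] = None
    gender: Optional[str] = None
    radius_miles: Optional[float] = None

def validate(spec: TargetingSpec) -> List[str]:
    """Return rule violations for a special-category ad; ordinary ads pass through untouched."""
    if spec.category not in SPECIAL_CATEGORIES:
        return []
    errors = []
    if spec.min_age is not None:
        errors.append("age targeting is not allowed")
    if spec.gender is not None:
        errors.append("gender targeting is not allowed")
    if spec.radius_miles is not None and spec.radius_miles < MIN_RADIUS_MILES:
        errors.append(f"geographic radius must be at least {MIN_RADIUS_MILES:.0f} miles")
    return errors

print(validate(TargetingSpec(category="housing", gender="female", radius_miles=5)))
# ['gender targeting is not allowed', 'geographic radius must be at least 15 miles']
```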

In addition, advertisers for jobs, housing, and credit will have to certify their compliance with anti-discrimination laws. Under the agreement, the ACLU will monitor Facebook and its progress for a three-year period.

Concerns around Facebook’s targeted advertisements arose with a 2016 ProPublica report. The outlet reported that it was able to buy housing ads that excluded African American, Asian American, and Hispanic users from seeing them.

This was very obviously in violation of the Fair Housing Act of 1968, but Facebook originally defended itself to USA Today claiming that “multicultural” targeting is common in the ad industry. Still, the company said it was “working better” to understand the concerns brought up.

Then, in 2017, another ProPublica report found that discriminatory ads still got through Facebook’s system. Since then, Facebook has been hit by five discrimination lawsuits and charges from a variety of sources, including civil rights groups and individuals, as reported by CNN.

In September, the ACLU filed charges with the Equal Employment Opportunity Commission against Facebook, on behalf of the Communications Workers of America and individuals. The ACLU wrote, “These charges joined other litigation asserting race discrimination in job, housing, and credit ads and age discrimination in job ads.”

As online advertising becomes ingrained into people’s lives, it’s important to understand and confront discriminatory practices as they unfold on different platforms.

Discrimination hidden within Facebook’s targeted ads proves that exclusion doesn’t need to be stated outright and can be built into the ads themselves — similar to the practice of redlining.

Facebook probably isn’t the only online platform guilty of discriminatory practices, but this is still a win for civil rights advocates. Facebook is such a big company that being able to force them to completely overhaul a system sends a clear warning to everybody else.

“Because Facebook is such a dominant player in online advertising, today’s settlement marks a significant step toward ensuring that we don’t lose our civil rights when we go online to find a house, job, or loan,” the ACLU said. “But we’ll keep working to ensure that those rights remain intact no matter where we click.”

Any advertisers who aren’t creating housing, credit, or job ads will be able to continue using targeted advertising.

Congress Calls On Big Tech To Answer For Its Response to the Christchurch Shooting

Today, Rep. Bennie G. Thompson, chairman of the House Homeland Security Committee, wrote letters to executives from tech companies — including Facebook and YouTube — over their response to the Christchurch shooting and how video of the horrific event was able to spread online.

“I was deeply concerned to learn that one of the shooters live-streamed this terror attack on Facebook, and the video was subsequently re-uploaded on Twitter, YouTube, and other platforms,” Thompson said. “This video was widely available on your platforms well after the attack, despite calls from New Zealand authorities to take these videos down.”

“You must do better,” Thompson added. “It is clear from the recent pattern of horrific mass violence and thwarted attempts at mass violence — here and abroad — that this is not merely an American issue but a global one.”

Thompson’s letter called for Facebook CEO Mark Zuckerberg, YouTube CEO Susan Wojcicki, Twitter CEO Jack Dorsey, and Microsoft CEO Satya Nadella to appear before Congress and lay out how they’ll ensure something like this won’t happen in the future.

Although there’s great irony in anyone from Homeland Security calling out tech platforms for being a tool to spread Islamophobic hate, the letter reveals how big of a problem this has become. It also shows that the pressure is being felt in the United States, too.

Viral trends are popular on the internet, with people carefully planning ways to maximize the number of views on an event. Unfortunately, these viral moments are not limited to new games or jokes. As the Christchurch shooting demonstrated, it’s not too difficult to make hate go viral.

Video of the shooting, which left 50 people dead, was broadcast on Facebook Live. At first, it was difficult to know how many people watched the video as it streamed. Now, though, Facebook says about 200 people watched the live video — and nobody flagged it to moderators.

The shooter also uploaded a 17-minute video to Facebook, Instagram, Twitter, and YouTube. Since then, the video has exploded across social media, with companies scrambling to keep up.

In a tweet, Facebook said it removed 1.5 million videos of the New Zealand shooting in the 24 hours after the original broadcast. YouTube also said it removed thousands of uploads of the video and Reddit closed its infamous r/watchpeopledie after the video surfaced there.

Those who shared the video or didn’t report it when it surfaced online are part of the reason it spread as widely as it did. There shouldn’t be any entertainment in watching people die. But big tech companies — who own and operate the platforms where these hateful messages are being spread — shoulder some of the responsibility as well.

What the video’s rapid spread shows is that there’s a disturbing culture of promoting hate, in this case Islamophobia, that is perpetuated online. Most alarmingly, this problem isn’t new —  the shooter cited his own online influences in a manifesto — but tech companies have done little to nothing to stop it.

For example, Twitter is infamous for its reluctance to ban Nazis, hatemongers, and others. In some cases, known members of the alt-right have even been verified, a process meant to signify their position as an account of public interest. In a 2018 article, TechCrunch referred to Twitter as a “Nazi haven”.

Facebook, meanwhile, allows advertisers to target people based on their interest in Nazis. Hate speech has been a problem on the platform since it was originally created, but not much has been done about it.

The Christchurch shooting occurred because Islamophobia has crept into every aspect of media and people’s lives. The dehumanization of Muslims that casts them as an inherent threat to social order isn’t limited to a particular party or platform.

The video was allowed to spread because tech companies have displayed time and time again that they don’t take hate seriously. There is a belief that it’s all talk and the hate ends online, but Christchurch is a harsh reminder that it doesn’t.

Tech companies can’t shrug off responsibility any longer or pretend that they’re unaware of what’s festering on their platforms. They need to be held accountable for what they allowed to spread and for the consequences that follow.

Instagram Adds New Checkout Features For Shoppers

Shopping on Instagram just got a lot easier.

The app has added a new checkout feature. Users can now buy from their favorite store directly through the app. H&M, ColourPop Cosmetics, Nike and other brands are already using the feature to cash in on customers.

Merchants will have to pay a fee to sell their items through the app, which makes the feature another boost to Instagram’s business offerings. Users can store their PayPal, Mastercard and Visa payment information within the app.

The newest feature comes almost a week after Facebook, Instagram and WhatsApp crashed following a “server configuration change.” Some users who run their businesses solely through Instagram took to Twitter to complain about how the crash negatively impacted their engagement and sales numbers for the day.

“Once your first order is complete, your information will be securely saved for convenience the next time you shop,” Instagram said on its blog. “You’ll also receive notifications about shipment and delivery right inside Instagram, so you can keep track of your purchase.”

With Instagram’s newest checkout feature, another crash could be detrimental to companies looking to make the platform one of many avenues for sales and marketing.

Instagram has undergone some rapid changes lately. From increased ads to IGTV to its recent series of shopping features, it’s almost impossible to close the app — which is great news for the company.

Instagram has even taken the time to build out its desktop features. Before its updates, users couldn’t view Instagram Stories, post comments or view their Discovery page, but now the desktop version of the app is nearly as fun as mobile.

Despite some privacy and data issues with its parent company Facebook, Instagram may be in its golden age as more consumers and business professionals utilize the app. Instagram’s expansion mirrors an earlier era for Facebook, when users had to pry themselves away from their computer screens and phones.

During the rise of app culture, Facebook introduced games that connected users with friends, Messenger, groups and other multimedia, stopping users from venturing to other apps to find fun. Now the company is applying these same tactics to bolster users and engagement on Instagram.

WhatsApp’s Co-Founder Says You Should Delete Your Facebook Account

Earlier this week, WhatsApp co-founder Brian Acton urged students to delete their Facebook accounts, as reported by BuzzFeed News.

Acton made a rare public appearance alongside Ellora Israni, a former Facebook employee who founded She++. The two were speaking at Stanford University to students taking Computer Science 181. The undergraduate course focuses on “tech companies’ social impact” and their “ethical responsibilities.”

During his talk, Acton brought up the issue of moderation, which has become a big question in the tech world — especially for social media companies.

Tech companies continue to face struggles around moderating content, although that’s something WhatsApp didn’t have to deal with because its encryption means no one can monitor what’s said on the app.

“I think it’s impossible,” Acton said in regards to moderation on other platforms. “To be brutally honest, the curated networks — the open networks — struggle to decide what’s hate speech and what’s not hate speech. … Apple struggles to decide what’s a good app and what’s a bad app. Google struggles with what’s a good website and what’s a bad website. These companies are not equipped to make these decisions.”

Acton went on to add, “And we give them the power. That’s the bad part. We buy their products. We sign up for these websites. Delete Facebook, right?”

Acton also criticized Facebook in other ways, referring to it as “a bit of a monoculture.” It’s apparent that he has been pretty vocal about his critiques for a minute.

He originally left the company in 2017 — a decision that cost him $850 million — because Facebook wanted to monetize WhatsApp. In an interview, Acton told Forbes, “At the end of the day, I sold my company. I sold my users’ privacy to a larger benefit. I made a choice and a compromise. And I live with that every day.”

This isn’t the first time Acton has urged people to delete their Facebook accounts. When the Cambridge Analytica scandal originally broke last year, Acton tweeted, “It is time. #deletefacebook.”

He hasn’t tweeted another thing since.

Tech Companies Are Scrambling To Remove Video of The Christchurch Shooting

Since the Christchurch massacre, social media platforms have scrambled to keep video of it off their platforms. Days after the attack, it’s not difficult to find clips or still images from it. To many, this opens up questions about tech companies’ failures to regulate hate on their platforms, and who shares responsibility in moments like this.

After the shooting, where at least fifty Muslims were killed in two New Zealand mosques, archives of the alleged shooter’s page revealed only 10 people had tuned into his Facebook Live broadcast of the event, according to The Wall Street Journal.

Although the original video didn’t have many viewers, it exploded across social media in the days following the attack. Facebook, which has faced the brunt of criticism due to its site hosting the livestream, says it removed 1.5 million videos of the New Zealand shooting in the 24 hours after the shooting was broadcast.

In a thread on Twitter, the company said it blocked over 1.2 million of the videos before they were uploaded. However, that means about 300,000 videos managed to appear on Facebook, and even that number is still far too big.

On Sunday, New Zealand’s Prime Minister, Jacinda Ardern, told reporters during a press conference in Wellington that Facebook’s chief operating officer Sheryl Sandberg reached out and the two have plans to discuss the livestream.

“Certainly, I have had some contact with Sheryl Sandberg. I haven’t spoken to her directly but she has reached out, an acknowledgement of what has occurred here in New Zealand,” Ardern said.

She went on to add, “This is an issue I will look to be discussing directly with Facebook. We did as much as we could to remove, or seek to have removed, some of the footage that was being circulated in the aftermath of this terrorist attack. But ultimately, it has been up to those platforms to facilitate their removal.”

In addition to a livestream, the alleged shooter uploaded a 17-minute video to Facebook, Instagram, Twitter, and YouTube.

An Uncontrollable Spread

Part of the issue is that companies have begun to over-rely on artificial intelligence software that can’t actually detect violent content as it’s being broadcast, as noted by The Wall Street Journal. Although some platforms, like Facebook, have human content moderation teams, those moderators are often overworked and traumatized, and some end up radicalized themselves.

Once a video goes up, it’s not difficult for people to upload it themselves, create copies, and slightly doctor them in order to repost. For example, The Wall Street Journal reported that a version of the video was edited to look like a first-person shooter game and then uploaded on Discord, a messaging app for videogamers. 

Since the attack, YouTube said it’s removed thousands of uploads of the video, but even the Google-owned company couldn’t stop the spread of the footage quickly enough. Elizabeth Dwoskin and Craig Timberg of The Washington Post reported that the tech giant had to take drastic measures:

As its efforts faltered, the team finally took unprecedented steps — including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems. Many of the new clips were altered in ways that outsmarted the company’s detection systems.

– The Washington Post

But even as tech companies, with all their engineering support, took down the video, people who wanted to see it and distribute it knew exactly where to go.

Back in 2018, Reddit quarantined its infamous subreddit r/watchpeopledie, which allowed people to do exactly what it said — watch videos of people dying. According to TechCrunch, the subreddit shared extremely graphic videos like the 2018 murder of two female tourists in Morocco.

Despite the quarantine, people could still access the subreddit directly. It became active as people sought out videos of the Christchurch shooting. TechCrunch reported one of the subreddit’s moderators locked a thread about the video and posted the following statement:

“Sorry guys but we’re locking the thread out of necessity here. The video stays up until someone censors us. This video is being scrubbed from major social media platforms but hopefully Reddit believes in letting you decide for yourself whether or not you want to see unfiltered reality. Regardless of what you believe, this is an objective look into a terrible incident like this.

Remember to love each other.”

Late Friday morning, Reddit finally banned the subreddit entirely and similar ones such as r/gore and r/wpdtalk (“watch people die talk”). A spokesperson told TechCrunch, “We are very clear in our site terms of service that posting content that incites or glorifies violence will get users and communities banned from Reddit.”

Valve, the company behind the gaming platform Steam, also had to remove over 100 profiles that praised the shooter. According to Kotaku, dozens of users on the site were offering tribute to the alleged shooter. One profile even showed a GIF of the attack, and others called the shooter a “saint” or “hero”, or referred to him as “Kebab Remover”.

The concern of social media’s role in promoting the attack isn’t contained only to New Zealand. The leader of Britain’s Labour Party, Jeremy Corbyn, told Sky News on Sunday, “The social media platforms which were actually playing a video made by this person who is accused of murder…all over the world, that surely has got to stop.”

Corbyn went on to explain that although the responsibility rests in the hands of the operators of social media platforms, the situation calls into question how social media companies are regulated.

The spreading of hateful messages is one of social media’s biggest, oldest problems.

Before Cambridge Analytica or any other misinformation battle, hate speech and harassment were at the forefront on these platforms, and groups were able to use them as a megaphone to spread their messages. Facebook is one of the wealthiest companies in the entire world. It supposedly employs the smartest people and the best engineers. So why has this problem, one that’s festered for so long, not been fixed? That’s something tech companies are going to have to start answering for.

Slack Removes 28 Accounts Associated With Hate Groups

This week Slack announced that it has removed more than two dozen accounts linked to known hate groups from its platform.

“The use of Slack by hate groups runs counter to everything we believe in at Slack and is not welcome on our platform,” Slack said on its website.

Facebook, Twitter and other social media platforms have increasingly been used to spread bigoted ideologies and highlight violence that Slack wants no part of.

A survey from the Anti-Defamation League showed that 2018 was a record year for online hate and harassment. It seems that 2019 will not be reversing course despite platforms’ removal of such groups. The survey did not name Slack as an online location where users experienced hate speech — this could be because the platform is mostly used in professional environments where such behavior would be punished externally.

Facebook has done mass removals of groups, ads, and pages associated with hate groups over the past few months. Following policy updates that stop administrators of removed pages from creating duplicates, Facebook deleted 22 pages associated with conspiracy theorist Alex Jones, whose pages have been known to spread misinformation and incite violence.

Controlling and getting rid of hate groups has become a growing issue for social media platforms and other websites that exponentially spread misinformation. Platforms like YouTube, Twitter, and Facebook have turned to policy updates to curb the impact of hate groups; however, these communities often quickly find ways to create new pages and profiles.

For Slack, hate groups are a fairly new issue. The platform has been more focused on building inclusive workspaces — it recently introduced plug-ins aimed at challenging users’ unconscious gender biases.

“Using Slack to encourage or incite hatred and violence against groups or individuals because of who they are is antithetical to our values and the very purpose of Slack,” the company said in its statement.

Here is Slack’s full statement:

Today we removed 28 accounts because of their clear affiliation with known hate groups. The use of Slack by hate groups runs counter to everything we believe in at Slack and is not welcome on our platform. Slack is designed to help businesses communicate better and more collaboratively so people can do their best work. Using Slack to encourage or incite hatred and violence against groups or individuals because of who they are is antithetical to our values and the very purpose of Slack. When we are made aware of an organization using Slack for illegal, harmful, or other prohibited purposes, we will investigate and take appropriate action and we are updating our terms of service to make that more explicit.