2018 Was A Record Year For Online Hate and Harassment

If logging on to Facebook, Twitter or any other social media platform in the past year has felt draining, you're most likely not imagining it.

Facebook became a cesspool for online harassment in 2018, a new survey from the Anti-Defamation League shows. More than half (56 percent) of the survey's respondents said they experienced hate on Facebook. Twitter and YouTube clocked in at 19 percent and 17 percent, respectively, among participants who said they were harassed on those platforms.

Photo: Anti-Defamation League

According to the Anti-Defamation League, 32 percent of Americans reported that the harassment they received was because of their sexual orientation, religion, race or ethnicity, gender identity or disability.

Twitter has updated its community standards, while YouTube is now working to alter features on its products to curtail some of the negative behavior. Earlier this month, YouTube announced that it is working on ways to stop “dislike campaigns” on videos by large online groups. In November, Twitter expanded its hateful speech policies to prohibit dehumanizing speech.

ADL's report also captures Americans' outlook on the impact of hate speech online. Nearly 60 percent of respondents believe that online harassment is making hate crimes off the internet more common. Another 50 percent believe that online abuse is increasing the use of derogatory language.

As social media platforms continue to expand their policies to deal with the influx of cyberbullying and harassment, it is crucial for users to find their own way to cope by reporting the abuse or taking a break from the platforms.


Facebook Will Now Show You Which Advertisers are Downloading Your Data

Facebook announced it is updating its Custom Audience Transparency features later this month, giving users a snapshot of why they are being targeted with ads and when companies are downloading their data.

The company said in a post that the “Why am I seeing this?” explanations will be more detailed, providing users with information on which businesses have downloaded users’ data and why.

Facebook has demographic targets for its advertising that are based on a user's profile, including age and gender. Its interest targeting comes from pages that a user likes and follows and from other ads they have clicked on. A Facebook spokesperson said that political-related ad targeting uses "a variety of signals to determine if someone might be interested in that content."

The post also mentioned that "on behalf of" agreements may affect the update: modifications will have to be made for agencies that advertise on behalf of another business. The agreements are opt-in, which gives advertisers the choice of whether to sign one.

Facebook has been working to become more transparent with its users on breaches, data mining and advertising practices used on its platform. In December, the company hosted a New York City pop-up to help users learn more about targeted ads, and how to control their privacy within the app after major data breaches.  

Update: This article has been updated to show some of the demographic, interest and political targets that advertisers can use. It also notes that the "on behalf of" agreements are opt-in.

Facebook is Rating Employees on Their Social Impact

Facebook has announced revisions to its performance review system and will now evaluate employees on whether they are working to address social issues, according to CNBC.

The company said it may be basing pay raises on other factors as well. Employees may also be reviewed on criteria such as whether they are assisting in building new experiences for the platform or supporting businesses that use Facebook as a vital resource.

“So in a nutshell: Facebook’s moving from a focus on growth, to a focus on change,” a representative of the company told CNBC.

Previously, employee reviews were based on their performance in enhancing user growth and engagement. Pay bonuses were also contingent on if employees helped improve Facebook products and revenue. Facebook’s recent changes reflect an effort to be more cognizant of its social impacts and correct prior errors.

According to CNBC, the new criteria are set to be used in the first half of 2019.


Facebook Bans An Additional 22 Pages Associated With Alex Jones

Facebook has removed 22 more pages associated with Alex Jones, a conspiracy theorist whose pages have been known to spread misinformation and incite violence. Jones is best known for claiming that 9/11 and the Sandy Hook shooting were hoaxes.

The company deleted the pages under its latest policy changes, which prohibit administrators of removed pages from creating duplicates. Last month, Facebook announced it would make pages more transparent and have third-party fact-checkers flag misleading or incorrect content.

The company relied on third-party fact-checkers during the 2018 midterm elections to stop the spread of misinformation about polling locations, times and dates.

Facebook's update also bars administrators from shifting to other existing pages once their original pages have been removed.

Not all of the 22 newly removed pages had Jones as a direct administrator; however, there were many common administrators between those pages and the pages removed in August.

Facebook has been cracking down on misinformation and on pages and groups that incite violence. In November, the platform banned the Proud Boys, an alt-right group that used Facebook heavily to recruit members and disseminate information to its followers. Facebook removed the group's page after members were linked to violent protests in New York City.

As Facebook continues its efforts to improve its social impact, it is making the removal of misinformation and hate one of its biggest goals ahead of the 2020 presidential election.


Here’s Why We Never Experienced Facebook’s “Common Ground” Feature

Common Ground was set to be Facebook's way of getting users on opposing sides of the political spectrum to have civil conversations on the platform. Then the company's longtime global policy chief, Joel Kaplan, nixed the idea.

According to the Wall Street Journal, Kaplan worried the feature could be seen as biased against conservative Facebook users, and he played a key role in making sure it never saw the light of day. Kaplan and other executives were also uncertain about how Common Ground would affect user engagement on the platform.

Common Ground’s goal was to bring users together from different backgrounds and political views and encourage less hostile conversations.

You may remember Kaplan as the longtime friend who sat behind Brett Kavanaugh during the judge's congressional hearing, as Kavanaugh testified about sexual assault allegations brought against him by Christine Blasey Ford.

Facebook employees saw this as a sign that Kaplan supported Kavanaugh, and the company was forced to make a public statement saying that its leadership team “made mistakes” when handling the events surrounding his confirmation.

Kaplan has seemingly been Facebook's mouthpiece for conservatives. A former White House aide to President George W. Bush, he pushed for a partnership with the fact-checking division of right-wing news site The Daily Caller, according to WSJ.