GoFundMe Will Remove Anti-Vax Campaigns Spreading Misinformation

Anti-vax campaigns have entered the conversation around misinformation on the internet, with false theories about the harms of vaccination spreading on platforms like Facebook and YouTube. The role fundraising sites play in this has mostly gone unnoticed, until now.

Recently, GoFundMe shared it will remove anti-vax campaigns on its website, as reported by The Daily Beast. A spokesman for GoFundMe told the outlet, “Campaigns raising money to promote misinformation about vaccines violate GoFundMe’s terms of service and will be removed from the platform.”

According to The Daily Beast, fundraisers by anti-vax groups have raised at least $170,000 in the last four years on GoFundMe. That’s a significant amount of money. Removing their ability to use GoFundMe may not completely defund anti-vax groups, but it’ll definitely make things harder for them moving forward.

Anti-vax campaigns can be dangerous not only because some of them rely on bad science, but also because they have real-life consequences. Health officials in New York partly blamed the anti-vax movement for a recent measles outbreak that has mostly infected children.

Last month, Pinterest announced it was halting search results on vaccines to combat anti-vax messaging. It was surprising to see Pinterest make that decision, but it prompted sites like Facebook to share their own plans for combating anti-vax misinformation as well.

It’s unlikely that GoFundMe is the only crowdfunding site that the anti-vax movement uses. Hopefully, others will be motivated to follow GoFundMe’s lead and stop the spread of misinformation.

Former Amazon Worker Claims He Was Fired For Union Organizing

Justin Rashad Long — a former worker at the Amazon fulfillment center in Bloomfield — has filed charges with the National Labor Relations Board (NLRB) accusing the company of unfair labor practices, as reported by SILive.

Last month, Long was fired by one of his managers for a “safety violation.” However, in his complaint, Long claimed that the real reason for his firing was that he was involved in organizing around Amazon’s poor working conditions.

An Amazon spokeswoman maintains that Long’s allegations are false, according to SILive, stating, “His employment was terminated for violating a serious safety policy.”

Amazon is no stranger to allegations regarding unsafe working conditions. In May of 2018, Business Insider reported the “horror stories” of Amazon workers. Some claims included workers urinating in trash cans because they didn’t have time to go to the bathroom.

The Bloomfield facility, located in Staten Island, is Amazon’s first New York-based fulfillment center. Months after it opened, workers announced a union push. Their concerns were also based around working conditions, according to The Guardian.

They received assistance from the Retail, Wholesale, and Department Store Union, whose president Stuart Appelbaum told The Guardian that Amazon has a record of “routinely mistreating and exploiting its workers.”

Retaliation from corporations is a major concern for labor organizers, and Amazon is aggressively anti-union. For example, the company sent a 45-minute union-busting video to Whole Foods managers after hearing rumors of potential organizing.

With that in mind, Long’s claims aren’t far-fetched. The RWDSU plans to continue supporting Long and other Amazon workers, according to SILive.

If Amazon workers across the country unionize, it will certainly mean big changes for the company, and if past complaints around working conditions carry any weight, it seems that change is long overdue.

Facebook Allowed Several Neo-Nazi Pages To Remain Up Because They “do not violate community standards”

A report found that Facebook allowed various Neo-Nazi groups to remain on its platform because they “do not violate community standards,” according to recent reporting from The Independent.

The Counter Extremism Project, a nonprofit combating extremist groups, reported 35 pages to Facebook, according to The Independent. Although the company said it’d remove six of them, the other requests were met with this response:

“We looked over the page you reported, and though it doesn’t go against one of our specific community standards, we understand that the page or something shared on it may still be offensive to you and others.”

– The Independent

The groups reported included international white supremacist organizations, with many making racist or homophobic statements. Some groups also had images of Adolf Hitler and other fascist symbols.

Although this is particularly troublesome following the Christchurch shooting — which was broadcast on Facebook Live — this has been a long-standing issue for Facebook. The platform is notorious for allowing hate speech to flourish while poorly applying its own community standards.

At first glance, Facebook’s definition of “hate speech” seems fine. Under its guidelines, Facebook bans hate speech that “directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, or gender identity, or, serious disabilities or diseases.”

However, Facebook ignores power imbalances in determining what’s hate speech.

On Facebook, for example, users can be banned for saying “men are trash”. The power imbalance between men and women in a patriarchal society tells you that, even if someone’s feelings might be hurt, saying “men are trash” doesn’t harm men on a societal level.

But, while it was banning users for “men are trash” commentary, Facebook took far longer to ban Alex Jones, the host of InfoWars who incited harassment and spread misinformation, as reported by Mic.

Along with Facebook ignoring power imbalances, it has also — intentionally or unintentionally — found a way to monetize hate.

Earlier this year, The Los Angeles Times reported that Facebook actually allowed advertisers to target users based on their interest in Nazis. Advertisers were able to home in on topics like “Josef Mengele” and “Heinrich Himmler.”

By allowing advertisers to target people based on their interest in Nazism, Facebook essentially allowed a violent ideology — one that has led to actual genocide — to become a source of profit. That profit motive undercuts any incentive to take proactive action against the kind of violent speech that has real consequences for oppressed people.

Facebook is under a lot of pressure now, especially from New Zealand’s Prime Minister Jacinda Ardern, who has remained unimpressed by the company’s responses to the broadcast of the Christchurch shooting on its platform.

By allowing Neo-Nazis and other hate groups to remain on its site, and even letting them pay for advertising, Facebook planted the seeds for events like the broadcast of the Christchurch shooting on its platform.

Nothing occurs in a vacuum. Increasing Islamophobic rhetoric from all major political parties made Muslims an easy target. But online platforms like Facebook declaring that Neo-Nazis don’t violate community standards help embolden their actions.

New Zealand Officially Bans People From Sharing Christchurch Shooter’s Manifesto

On Saturday, New Zealand’s Office of Film & Literature Classification officially banned the Christchurch shooter’s manifesto. By labeling it “objectionable,” the government considers the ban a justifiable limit on freedom of expression.

Under the ban, it’s now illegal to have a copy of either the video or the document or to share it with others — including via online links. The New Zealand government urges people to report social media posts, links, or websites displaying the video or manifesto.

If someone is found to have the manifesto, they can face up to ten years in prison, and those distributing it could face up to 14 years, as reported by Business Insider. The consequences for owning or sharing the video, though, are unclear.

Although the full video is banned, that doesn’t mean every screenshot or still image from it falls under the ban. The Office of Film & Literature Classification website notes that images from the video “depicting scenes of violence, injury or death, or that promote terrorism may also be illegal.”

It’s likely that the New Zealand government will consider still images on a case-by-case basis, so there’s no sweeping proposal to make them all illegal.

During the aftermath of the shooting, New Zealand Prime Minister Jacinda Ardern announced that assault rifles would be banned. It was a win for New Zealanders but frustrating for many in the United States, where people regularly see calls for gun control after violent shootings but little in the way of legislation or actual policy change.

The response of New Zealand’s government after the Christchurch attacks proves that proactive steps can be taken to combat these shootings. The ban of the video and manifesto is just one step towards fighting the Islamophobic rhetoric that could embolden future attacks.

The manifesto’s ban has raised questions around free speech, but the Chief Censor, David Shanks, defended the office’s position in a statement, saying, “There is an important distinction to be made between ‘hate speech’, which may be rejected by many right-thinking people but which is legal to express, and this type of publication, which is deliberately constructed to inspire further murder and terrorism. It crosses a line.”

It’s also important to interrogate why people feel the need to read a manifesto obviously intended to help the attack go viral. To be frank, not much in the manifesto is “original.” The uncomfortable truth is that the shooter named online inspiration for his attack. The individuals and websites he named can still be found online.

One potential issue here is that banning the manifesto may further entice people to want to read it or turn the shooter into some kind of martyr for free speech or white supremacist ideology. We may not get a sense of the impact — positive or negative — of the new law until much later down the road.

The manifesto itself is a continuation of white supremacist violence, and that can be unpacked without giving the shooter more attention. Instead of trying to read the manifesto, people can listen to scholars, organizers, and community members, who work to understand and halt this specific type of violence and the rhetoric that follows.

The ACLU is Suing the FBI for Information On Its ‘Black Identity Extremists’ Report

The ACLU and the Center for Media Justice are suing the FBI for records related to a controversial 2017 report that cited a rise in Black extremism following police-involved shooting deaths of African Americans.

The report, titled “Black Identity Extremists Likely Motivated To Target Law Enforcement Officers,” claims that law enforcement officials were being targeted as protests against police violence erupted around the country.

The ACLU submitted a public records request asking for all documentation since 2014 that used the phrases “black nationalist,” “black identity extremist,” and “black separatist,” according to the Associated Press.

The lawsuit comes after the FBI withheld these documents and, in some cases, refused to search entire categories of records, according to the ACLU.

“The FBI’s baseless claims about the fictitious group of ‘Black Identity Extremists’ throws open the door to racial profiling of Black people and Black-led organizations who are using their voices to demand racial justice,” Nusrat Choudhury, deputy director of the ACLU’s Racial Justice Program, said in a statement.

Surveillance of Black people is nothing new; it is foundational to the United States. As times change, though, surveillance has shifted from its early roots — like the slave pass — to tactics like the ones the ACLU is fighting back against.

Online monitoring seems to play a key role in the identification of supposed ‘Black Identity Extremists.’ Within the document, the FBI noted that reporting was primarily derived from things like police investigations and “subjects’ posting on social media.”

“As a Black activist and member of the Black Lives Matter Network, I am concerned that the FBI is deploying high-tech tools to profile, police, and punish Black people who stand up for racial justice,” said Malkia A. Cyril, co-founder and executive director of the Center for Media Justice, noting the program’s similarities to COINTELPRO.

So far, one arrest has been made due to the surveillance efforts of the ‘Black Identity Extremist’ program. In December 2017, Rakem Balogun was arrested due to Facebook posts, as reported by The Guardian.

As technology changes, so do the ways in which surveillance is carried out. Social media has become a playground for surveillance, especially when it comes to targeting movements. Although it’s unclear just how much the FBI’s identification of ‘Black Identity Extremists’ relies on online surveillance, it’s been a problem for years.

For example, ACLU documents showed the Boston Police Department used a program called Geofeedia to conduct online surveillance between 2014 and 2016 (notably, the same program used by the Chicago Police Department in the same time span). The BPD monitored hashtags like #BlackLivesMatter, #MuslimLivesMatter, and the use of basic Arabic words.

This obvious inclusion of anti-Black Islamophobia in programs used by local police departments is important to note because the FBI’s made-up designation also puts Black Muslims at increased risk. The language used in the FBI’s report is reminiscent of Countering Violent Extremism, an entrapment- and surveillance-based program notorious for targeting Black Muslims.

The full extent of social media monitoring and how deeply the FBI is looking into the groups mentioned in its report are unknown, but it’s clear that it exists. Obtaining more information will allow the public to fully understand just how the culture of surveillance has adapted to a digital era. It will also give people insight into how the FBI is monitoring certain groups of people and how they choose whom to keep an eye on.

Whatever the documents show, it’s important to remember that this type of surveillance is not exceptional. Black people have been watched and tracked since they were forcibly displaced. The surveillance of Black people is the United States’ norm.

Facebook Confirms It Stored Millions of Passwords In Plain Text. You Should Change Your Password Right Now

Facebook isn’t exactly well known for having the best security practices and now, another blunder has surfaced.

On Thursday, a report from Krebs on Security revealed the company had stored hundreds of millions of users’ passwords for Facebook, Facebook Lite, and Instagram in plain text, making user passwords accessible to thousands of Facebook employees.

Usually, stored passwords are protected through a one-way process called hashing, but Facebook had a “series of security failures” that left people’s password information wide open, according to the Krebs report.
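To make the distinction concrete, here’s a minimal sketch of the general pattern hashing enables, using only Python’s standard library. This is not Facebook’s actual system — the salt size, iteration count, and function names are illustrative choices — but it shows why a properly stored password can’t simply be read back by employees the way plain text can.

```python
# Illustrative sketch only: the standard store-a-salted-hash pattern,
# not Facebook's internal implementation.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    """Return (salt, digest). The random salt means two users with the
    same password still get different stored digests."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
# Neither the salt nor the digest reveals "hunter2" -- unlike plain text,
# where anyone with database access can read every password directly.
```

The key property is that hashing is one-way: the service can check a login attempt without ever being able to recover the original password from what it stores.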

Pedro Canahuati, Facebook’s VP of Engineering, Security, and Privacy, confirmed the issue in a blog post titled “Keeping Passwords Secure,” writing, “As part of a routine security review in January, we found that some user passwords were being stored in a readable format within our internal data storage systems.”

This may not seem like a big deal, because the company said there’s no sign that the passwords were visible to anyone outside of Facebook. However, it proves, again, that Facebook’s security is subpar.

For example, in November of 2018, private messages from 81,000 users were put up for sale, as reported by The Verge. And Facebook is still facing repercussions for the Cambridge Analytica scandal that resulted in Mark Zuckerberg testifying before Congress.

Facebook is notifying all the people who were affected. According to Krebs, that’s between 200 and 600 million users. The company isn’t forcing anyone to change their passwords, but you should, even if you aren’t notified.

There’s no sign of abuse, but it’s better to protect your account than to take any chances.

San Francisco’s Cashless Ban Could Include Amazon Stores

Bans on cashless stores are popping up across the country as officials begin to weigh how going cashless excludes certain groups of people from shopping.

Now, San Francisco may join a host of other cities in passing similar legislation, but this time it could include Amazon Go stores.

Last month, San Francisco’s District Five Supervisor Vallie Brown introduced a bill requiring “brick-and-mortar” businesses to accept cash. The original bill excluded Amazon Go stores because there aren’t any employees present to take cash.

However, Brown expanded the proposal to include Amazon’s stores on Tuesday. This is a bold move, because Amazon isn’t the most graceful when the government tries to regulate its business practices. That was apparent when Seattle proposed a tax on big businesses to help the homeless.

It was also seen in Philadelphia, where a public fight between Amazon and city officials broke out over a bill banning stores from refusing cash payments. According to the Associated Press, Amazon threatened to forgo plans to build a store in the city if the bill passed.

Spoiler: the Philadelphia bill passed.

Amazon Go stores work by having cameras that follow people around. Ideally, you walk in, grab what you want, and a camera will register it and automatically charge your Amazon account once you leave.

The issue with cashless stores is that they’re exclusive by default. Not everyone has access to cashless payment options. In fact, studies have shown there’s a link between poverty and being unbanked.

In a memo, Brown cited racial disparities among the unbanked and how cashless stores exclude Black and brown communities. She pointed to data from 2005 showing that 50 percent of African American and Latino households were unbanked.

“In this reality, not accepting cash payment is tantamount to systematically excluding segments of the population that are largely low-income people of color,” Brown said.

San Francisco’s proposal notes that although some may choose not to have a bank account, “Others may not be well situated to participate in the formal banking system, or may be excluded from that system against their will. In short, denying the ability to use cash as a payment method means excluding too many people.”

In a rush to modernize shopping, those who are already disenfranchised cannot be left behind. Amazon will probably take up a loud role in trying to lobby against the proposed bill, but it’s unclear how that will turn out.

The Wing and Time’s Up Announce New Partnership

Today, The Wing and Time’s Up announced their official partnership, as reported by Fast Company. The two organizations joining forces makes a lot of sense, because both aim to center women in their own ways.

With the new partnership, the two organizations will support each other’s work. That includes hosting events and programming together, and The Wing will provide regular meeting space for Time’s Up. In addition, Fast Company reported that The Wing is giving Time’s Up a “charitable gift of stock.”

Women continue to face systemic issues in the workplace, such as the notorious pay gap. There are other issues, too, like how widespread sexual harassment against women is in the workplace.

These factors can make work feel unsafe and limit women’s ability to thrive, but organizations like The Wing and Time’s Up are setting out to confront them.

The Wing co-founder and CEO Audrey Gelman said, according to Fast Company, “Both Time’s Up and The Wing believe that all women, across all industries and backgrounds, deserve safety, fairness, and dignity as they work and as we all shift the paradigm of workplace culture.”

The Wing was founded in 2016 to act as a sort of social club for women, where they could network, find community spaces, and feel empowered. Time’s Up was founded in 2018 to address systemic inequalities and injustices in the workplace.

This feels like a really natural partnership and it’s sure to help both organizations further amplify the work that they do. By bringing their networks together, the two organizations will definitely create a much stronger base.

The U.S. Government Is Testing Facial Recognition Technology at Airports

From the TSA unnecessarily searching Black women’s hair to its profiling of Muslims, airport security can quickly turn from a simple headache into a reminder of your status as an inherent threat. Now, airports are introducing a new, invasive security measure that is raising alarms.

Throughout the United States, the US Customs and Border Protection program known as Biometric Exit is in use at departure gates in 17 airports, as reported by CNET. The program uses facial recognition technology to take pictures of people and “verify their identity.”

The agency says it only holds onto the photos of citizens “until their identities have been verified” and everyone else’s for 14 days. The photos of every non-U.S. citizen are also sent to the Department of Homeland Security’s Automated Biometric Identification System (IDENT), which can store information for 75 years.

By 2021, the system will be used to scan 97 percent of all travelers leaving the country, according to CNET. Facial recognition is also being tested on cameras throughout airports.

Overall, though, the entire program should raise concerns.

One of the biggest issues with the utilization of facial recognition technology is that it’s not as accurate as people assume. Facial recognition is notoriously terrible at accurately reading Black women, for example. Plus, taking pictures of people’s faces without them knowing is simply invasive.

There are some opt-out measures, but as the Electronic Privacy Information Center notes, CBP continues to change them. There is no formal procedure in place, after all.

There are also questions around the program’s legality. CBP claims it has the right to collect biometrics, but the ACLU, EPIC, the Electronic Frontier Foundation, and others say that no law allows CBP to collect biometric information on US citizens, according to CNET.

Airports are already difficult for pretty much anyone who isn’t white. Muslims, for example, face continued harassment and profiling when passing through security — and this includes Black Muslims. Implementing facial recognition technology that has historically been unable to read Black people poses a unique threat for communities who exist at multiple, vulnerable intersections.

The training of facial recognition programs themselves is also sketchy. New research by Os Keyes, Nikki Stevens, and Jacqueline Wernimont published in Slate revealed that the National Institute of Standards and Technology (NIST), which maintains the Facial Recognition Verification Testing program, has used pictures of victims of child pornography, immigration records, and photos of dead arrestees to train programs.

Not only should this give pause in any conversation about facial recognition tech, especially government use of it, but it’s important to note that those images were used without consent.

When it comes to the Biometric Exit program and plans to implement facial recognition tech throughout airports, the solution isn’t to train these programs to better recognize Black people or other vulnerable communities. Instead, the question becomes whether these government biometric programs should be allowed to exist at all.

Facebook is Giving More Details on How It Handled Video of the Christchurch Shooting

On Wednesday, in response to continued pressure from multiple countries, Facebook published a blog post by Guy Rosen, Facebook’s VP of product management, providing further details on the company’s response to the Christchurch shooting in New Zealand that left 50 people dead.

Companies like Facebook often use artificial intelligence to identify content that should be removed. However, an overreliance on AI can lead to exactly what happened with the Christchurch video: the shooting was broadcast on Facebook Live, and afterward, copies of the video continued to spread across the internet.

Although Facebook continues to cite that the video was only viewed about 200 times during its live broadcast — and that nobody flagged it to moderators — those excuses aren’t cutting it for users and lawmakers.

Now, the company says it tried an experimental audio technology to catch copies of the video that its AI missed. Facebook wrote that it “employed audio matching technology to detect videos which had visually changed beyond our systems’ ability to recognize automatically but which had the same soundtrack.”

In the post, Facebook noted that AI requires “training data.” Essentially, it learns to ban a specific type of content by seeing it a certain number of times. That also means the AI could be confused if the video was slightly altered visually, such as when it was doctored or when someone recorded their own screen and posted that recording.
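Facebook hasn’t published details of its audio matching, but the underlying idea can be sketched with a toy example: an exact file hash changes completely if even one sample differs, while a coarse, heavily quantized “fingerprint” of the soundtrack survives the small distortions introduced by re-encoding or screen recording. Everything here — the window size, the quantization step, the fake signal — is an invented illustration, not Facebook’s actual algorithm.

```python
# Toy illustration of why robust audio matching catches altered copies
# that exact hashing misses. Not Facebook's real system.
import hashlib

def exact_hash(samples):
    """Byte-exact hash: any change to any sample gives a different digest."""
    return hashlib.md5(repr(samples).encode()).hexdigest()

def audio_fingerprint(samples, window=100):
    """Coarse fingerprint: average energy per window, rounded hard,
    so tiny per-sample distortions map to the same code."""
    codes = []
    for i in range(0, len(samples) - window + 1, window):
        energy = sum(abs(s) for s in samples[i:i + window]) / window
        codes.append(round(energy * 10))  # aggressive quantization
    return tuple(codes)

# A fake "soundtrack": alternating loud and quiet blocks.
original = [0.5 if (i // 200) % 2 else 0.1 for i in range(2000)]
# A re-uploaded copy: same soundtrack, tiny per-sample re-encoding noise.
altered = [s + 0.001 * ((i % 3) - 1) for i, s in enumerate(original)]

print(exact_hash(original) == exact_hash(altered))                # False
print(audio_fingerprint(original) == audio_fingerprint(altered))  # True
```

Real systems use far more sophisticated perceptual fingerprints, but the design trade-off is the same one Facebook described: matching must be loose enough to survive alterations while still identifying the same underlying content.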

Facebook didn’t provide too many details on the audio technology beyond that. But, it’s clear that the company is trying to be more transparent due to increased government pressure to explain its process.

New Zealand’s Prime Minister Jacinda Ardern has made it clear that she’s been unimpressed with Facebook so far. She has been in contact with Facebook’s chief operating officer, but increased her criticism of the platform on Tuesday in Parliament.

Facebook is also facing potential consequences here in the United States.

Yesterday, Rep. Bennie G. Thompson, chairman of the House Homeland Security Committee, wrote a letter to executives from top companies, including Facebook and YouTube, where the video continued to circulate.

In it, Thompson said they must “do better” and called for executives, including Facebook’s CEO Mark Zuckerberg, to appear before Congress in order to explain their process and ensure something like this won’t happen again.

What is perhaps most frustrating about the entire conversation so far is that it continues to center on reactivity. Facebook says it couldn’t stop the original video because nobody reported it, but the type of online hate that the shooter identified as inspiration has existed on the platform for decades.

Social media companies have historically taken a relaxed approach to confronting online hate. But Christchurch, and the lack of appropriate response to it, isn’t an event that can easily be swept under a rug.