Twitter Says That Protected Tweets For Android Users Were Exposed For Years

Twitter users with Androids were in for an unpleasant surprise when the platform notified them that a bug in its system made their “protected tweets” accessible for years.

On January 17, Twitter said in a blog post that Android users’ tweets were exposed if “certain account changes were made.”

The company said that Android users who changed the email address associated with their account between November 3, 2014, and January 14, 2019, were vulnerable to the bug.

Web users and iOS users were not impacted by the bug.

Twitter apologized in the blog post and encouraged users to review their privacy settings to ensure the settings reflect their preferences. Twitter did not say how many users had been affected.

“We are providing this broader notice through the Twitter Help Center since we can’t confirm every account that may have been impacted,” the company said.

This is the latest in a string of social media platform mishaps, and the Irish Data Protection Commission is now investigating the matter. The company is also under investigation for data-collection issues under the General Data Protection Regulation (GDPR).

Twitter’s privacy mishaps are part of a bigger trend in the tech industry over the past several months.

In December, a bug on Facebook gave third-party apps too much access to users’ photos and may have impacted up to 6.8 million users in total. Earlier this month, a TechCrunch report revealed that hackers hijacked dormant Twitter accounts in an attempt to spread Islamic State propaganda.


Facebook Removes Ads From Far-Right Gaming Group

Nearly eight months after Facebook removed Britain First, a far-right, fascist organization, from the platform for hate speech and violating its community standards, ads from the group began surfacing on the platform.

Facebook recently removed Britain First ads petitioning to stop a mosque from being built in the U.K. The ads were posted during the holiday season on a gaming page by Political Gamers TV, a network dedicated to “gamers worldwide who wish no limits on speech.”

Political Gamers TV is calling the removal of the ads “political discrimination” and plans to sue Facebook, according to the BBC. 

Facebook has made a growing effort to remove accounts, ads, and groups that violate its community standards. In November, Facebook banned the alt-right group Proud Boys, its followers, and its founder, Gavin McInnes, from Facebook groups and Instagram after the group was linked to violent protests in New York.

Facebook is one of many social media platforms and other websites cracking down on hate speech and hate groups.

“Our team continues to study trends in organized hate and hate speech and works with partners to better understand hate organizations as they evolve,” a Facebook spokesperson said in a statement in November. “We ban these organizations and individuals from our platforms and also remove all praise and support when we become aware of it.”

Political Gamers TV’s Twitter and Instagram pages are still live.

LinkedIn’s Reid Hoffman Says He Regrets Investing In Misinformation Campaign

LinkedIn co-founder and billionaire Reid Hoffman donated more than $750,000 to an organization that used misinformation to help defeat Alabama Republican Senate candidate Roy Moore.

Moore, who came under fire for sexual misconduct allegations, lost last year’s special election, and online bots helped damage his campaign.

The campaign against Moore was run by American Engagement Technologies (AET), a left-leaning firm that works to get Democrats into office. The firm is run by Mikey Dickerson, a key player in helping the Obama administration establish the United States Digital Service and revamp the original HealthCare.gov website.

Hoffman said he donated to AET before knowing about their use of misinformation tactics to help Democratic candidates. He also mentioned his donations to dozens of other organizations.

“I find the tactics that have been recently reported highly disturbing,” Hoffman said in a statement to the Washington Post. “For that reason, I am embarrassed by my failure to track AET – the organization I did support – more diligently as it made its own decisions to perhaps fund projects that I would reject.”

AET allegedly used Russian-like tactics to boost support for Moore’s Democratic opponent Doug Jones through Twitter and Facebook. Hoffman has called for an investigation into the matter.

Social media moguls have come under increasingly harsh criticism for their platforms’ roles in misinformation campaigns and for the organizations they fund. Earlier this month, the National Association for the Advancement of Colored People (NAACP) hosted a week-long protest against Facebook over its privacy and data issues and its involvement with partisan strategy firms. Facebook has been under fire for hiring Definers Public Affairs to conduct opposition research on George Soros, who has been a heavy critic of the company.

Twitter, Facebook, and Instagram were major targets during the Russian election interference. Ahead of the midterms, Facebook and Twitter worked to remove accounts and bots associated with misinformation.

“I want to be unequivocal: there is absolutely no place in our democracy for manipulating facts or using falsehoods to gain political advantage,” Hoffman told the Post.


Facebook Is Reportedly Working On A Cryptocurrency For WhatsApp

Facebook Inc. is creating a cryptocurrency that will work through WhatsApp, according to Bloomberg.

The cryptocurrency will focus on WhatsApp’s India market, which boasts more than 200 million users, and would allow users to transfer money. India also leads the world in remittances — $69 billion was sent to people in the country in 2017.

Facebook is developing a stablecoin based on the U.S. dollar to offset some of the volatility associated with cryptocurrencies and make daily purchases easier.

Stablecoins are becoming more popular, and there were more than 120 ventures related to the coins in the past year, according to Stable.Report, a stablecoin tracker.

The company has been increasingly dipping into the crypto industry, appointing David Marcus to head its blockchain initiatives in Messenger in May. Marcus is the former president of PayPal and has been with Facebook since 2014. The company also has about 40 people in its blockchain department after a recent hiring sprint, according to Bloomberg.

Although the project is far from launch, it would be the first from a tech company of Facebook’s size. India has over 400 million internet users, and that number is expected to increase in the coming years. If Facebook is successful, we could see other similar companies try to break into the blockchain world.

Here’s Why We Never Experienced Facebook’s “Common Ground” Feature

Common Ground was set to be Facebook’s way of getting users on opposing sides of the political spectrum to have civil conversations on the platform until the company’s longtime global policy chief Joel Kaplan nixed the idea.

According to the Wall Street Journal, Kaplan was worried the feature could be biased against conservative Facebook users and played a key role in making sure it never saw the light of day. Kaplan and other executives were also uncertain about how Common Ground would impact user engagement on the platform.

Common Ground’s goal was to bring users together from different backgrounds and political views and encourage less hostile conversations.

You may remember Kaplan, who sat behind his longtime friend Brett Kavanaugh during the judge’s congressional hearing as he gave testimony about sexual assault allegations brought forth against him by Christine Blasey Ford.

Facebook employees saw this as a sign that Kaplan supported Kavanaugh, and the company was forced to make a public statement saying that its leadership team “made mistakes” when handling the events surrounding his confirmation.

Kaplan has seemingly been Facebook’s mouthpiece for conservatives. He’s a former White House aide for President George W. Bush and has pushed for a partnership with the fact-checking division of right-wing news site The Daily Caller, according to WSJ.


Facebook Said It Gave a Third Party App Too Much Access To User Photos

Another day, another bug reported from Facebook.

On Friday, the social platform said that it discovered a bug in its photo application programming interface (API) that allowed third-party apps to access a broader set of photos than usually permitted. Users who allowed third-party apps access to their photos may have been affected. 

The bug was caused by an error in a code update for the photo API and may have impacted up to 6.8 million users in total. Facebook said it immediately began investigating the issue. Once the bug was discovered, Facebook notified the Irish Data Protection Commission (IDPC), which treated it as a reportable breach under the General Data Protection Regulation (GDPR).

The company said in a blog post that photo access was only available from September 13 to September 25, 2018. The bug also impacted photos that people uploaded to Facebook but chose not to post.

“For example, if someone uploads a photo to Facebook but doesn’t finish posting it – maybe because they’ve lost reception or walked into a meeting – we store a copy of that photo so the person has it when they come back to the app to complete their post,” Tomer Bar, Facebook’s engineering director, said in the post.

However, the bug did not affect photos sent in Messenger conversations.

Facebook said it will notify users who have been impacted by the bug through an alert that directs them to the platform’s Help Center.

“We’ve heard loud and clear that we need to be more transparent about how we build our products and how those products use people’s data – including when things go wrong,” a Facebook spokesperson told AfroTech. “These types of notifications are designed to do just that.”


Facebook Removed Former Exec’s Post Criticizing The Company’s Diversity

After posting an article criticizing Facebook’s lack of diversity and treatment of black users, the company’s former partnerships manager Mark Luckie received a notification saying that his post had been removed. 

“Facebook has a black people problem,” Luckie said in his memo.

The memo was originally sent as an internal email to each Facebook employee and later published on the platform.

Luckie told AfroTech he was caught off-guard when he received the notification and that he was able to laugh at the irony of the situation.

“I was experiencing some of the same issues I was helping Black influencers with,” Luckie said. “Users are at the mercy of Facebook.”

“Mark Luckie’s post does not violate our Community Standards and is available on our site,” Facebook spokesperson Anthony Harrison said. “We are looking into what happened.”

The post has since been put back on the website, but its removal emphasizes one of Luckie’s key points.

“Black people are finding that their attempts to create ‘safe spaces’ on Facebook for conversation among themselves are being derailed by the platform itself,” Luckie said in the memo. “Non-black people are reporting what are meant to be positive efforts as hate speech, despite them often not violating Facebook’s terms of service. Their content is removed without notice. Accounts are suspended indefinitely.”

Joseph Dixon, the founder of dating app RealBlackLove Inc., said that Facebook forced him to change his group’s name because it violated the platform’s Community Standards on hate speech. 

“We made a Facebook group so that our singles can discuss dating,” Dixon said.

Dixon said that he changed the name of the group from “RealBlackLove Singles” to “Singles of Facebook.” He said that he searched through similar Facebook groups and found hundreds that targeted different demographics.

“It’s just terrible and unfair,” Dixon said.

As Facebook investigates why Luckie received a removal notification, the former employee said the company needs to amend its algorithm, which “hasn’t shown a lot of movement.”

A Former Facebook Manager Just Called Out The Company For Mistreatment of Black Employees and Users

A former Facebook manager posted a memo today accusing the company of having a problem with diversity, specifically one with black people.

Mark Luckie, the former Strategic Partner Manager for Global Influencers Focused on Underrepresented Voices, sent the memo to all Facebook employees on November 9 and reposted the message on the platform today.

“Facebook doesn’t have an excuse to not change,” Luckie told AfroTech. “This was my way of saying there is a way to change and this is how you do it.”

Luckie highlighted some of the internal and external issues the company has with handling diversity.

“Black people are finding that their attempts to create ‘safe spaces’ on Facebook for conversation among themselves are being derailed by the platform itself,” Luckie said in the memo. “Non-black people are reporting what are meant to be positive efforts as hate speech, despite them often not violating Facebook’s terms of service.”

Luckie told AfroTech that he decided to write the memo after speaking with various black employees at Facebook who had similar worries. He noted that many black people within the company end up leaving because they feel alienated, and those feelings now extend to black Facebook users.

“There’s a number of reasons to delete Facebook,” Luckie said. “If you don’t feel like Facebook has your back when you’re trying to have conversations on its platform, then you’re just going to go elsewhere.” 

Luckie said that although diversity hires are helpful, they are not the “cure-all” for Facebook’s problem.

“Inclusion should be a team effort. It is not enough to simply hire people to focus on diversity,” Luckie said. “Everyone on teams whose work focuses on varied cultural backgrounds should be responsible for ensuring the outcome of their work is representative of those groups.”

Luckie also mentioned that Facebook has created a facade of an inclusive space and that racism is rampant in its offices. Those who are on the receiving end of bigotry often hit a dead end when reporting issues to human resources.

In the memo, Luckie mentions incidents where he encountered microaggressions at the company and how other black employees were discouraged from joining and attending black events and groups.

“To feel like an oddity at your own place of employment because of the color of your skin while passing posters reminding you to be your authentic self feels in itself inauthentic,” Luckie said in the memo.

Luckie provided 10 suggestions for strategic plans and analytics that would help Facebook improve diversity and inclusivity in its offices and on its platform. He said that he also gave the suggestions on a team level while with the company.

“It wasn’t seen as a priority,” Luckie said. “Part of the reason I ended up leaving is that there were other black people in various areas of the company that were having the very same issues, so it was a systemic thing.”

The memo highlights some of the bigger issues with diversity in the tech space. Companies like Lyft, Uber, Google, and Amazon have diversity initiatives in place, but haven’t really moved the needle.

“Facebook has done a good job at convincing itself that it is inclusive,” Luckie said.

In a statement sent to Afrotech, a spokesperson for Facebook said the company is doing its best to be an inclusive company.

“Over the last few years, we’ve been working diligently to increase the range of perspectives among those who build our products and serve the people who use them throughout the world,” said Anthony Harrison, a Facebook spokesperson. “The growth in representation of people from more diverse groups, working in many different functions across the company, is a key driver of our ability to succeed. We want to fully support all employees when there are issues reported and when there may be micro-behaviors that add up.”

Instagram Users Can Now Track Time On The App With New Feature

Instagram rolled out its new activity dashboard, a feature that allows users to track how much time they spend on the app.  

Social media companies and smartphone makers have become increasingly aware of how their products are impacting mental health.

A recent study by the University of Pennsylvania found that limiting social media use to approximately 30 minutes per day may lead to significant improvement in well-being.

Instagram and its parent company Facebook announced in August that both platforms would offer activity dashboards to help users create healthy online habits; however, Facebook has not launched the feature yet.

“We want the time people spend on Facebook and Instagram to be intentional, positive and inspiring,” spokespeople from both platforms said in a blog post.

The “Your Activity” tab, located within the profile page, shows a bar graph of a user’s average daily usage of the app. Users can set daily reminders that notify them when they have reached their designated time limit. Time limits range from five minutes up to 24 hours.

There’s no mechanism that stops app usage once the time limit is reached and it does not track which features within the app are used the most. However, users can mute the app’s push notifications from the activity dashboard.

Another study by the University of Belgrade in Serbia found strong correlations between the time spent on social media platforms and depression. High rates of social media usage can lead to incorrect conclusions surrounding physical appearance, intelligence, and other characteristics, according to the study.

Lee Barnes, a marketing consulting analyst based in Atlanta, said he constantly uses Instagram and he does not see many benefits of using the app outside of staying connected with friends.

“Instagram only shows you what people want you to see,” Barnes said. “So, it’s easy to get caught up in what you see others doing which can make you feel less accomplished, successful and happy than you really are.”

Nia Wellman, 21, a Hampton University student and the CEO of HairDays, said she uses Instagram every day to stay connected with the target audiences for her brands and is ashamed to say how much time she spends on the app.

Wellman also said although Instagram can have a negative impact on mental health, it is not 100 percent to blame for depression or anxiety.

“When we see everyone’s highlight reels on Instagram and social media, in general, we tend to believe that they are the reason for making us insecure,” Wellman said. “We have to remain focused on our actual goals, while simultaneously being happy for others.”

Apple’s iOS also lets users track how much time they spend on each app.

Instagram Is Cracking Down On Fake Engagement

Instagram is cracking down on fraudulent activity on its platform.

According to a press release, the social media giant will begin removing any likes and follows coming from third-party apps that artificially drive engagement. Instagram says outside services are “bad for the community” and disrupt what it calls “genuine interactions” for its users. The company also notes that giving your Instagram account access to third-party apps makes it less secure.

So how will all this work?

Instagram says it’s built machine learning tools that allow it to identify accounts that use third-party apps. From there, it will send users a message (on Instagram) notifying them that likes and follows from those apps have been removed.

Large Instagram followings have become a high priority for companies and users on the platform. People often use follower and like counts as a barometer when judging how much influence a person has in a given community.

TechCrunch’s John Constine notes that if there’s uncertainty about where influence (likes, comments, follows) is coming from, it could hurt Instagram’s legitimacy moving forward.

“If no one can believe those counts are accurate, it throws Instagram’s legitimacy into question. And every time you get a notification about a fake follow or Like, it distracts you from real life, dilutes the quality of conversation on Instagram and makes people less likely to stick with the app,” John Constine said. “Anyone willing to pay for fake followers doesn’t deserve your attention, and Instagram should not hold back from terminating their accounts if they don’t stop.”

The crackdown comes at a time when Instagram’s parent company, Facebook, is under fire after a bombshell report in the New York Times detailed how the company handled disinformation on its platform following the 2016 elections.

Instagram hasn’t said what will happen to accounts that continue to use third-party apps, but noted that their “user experience may be impacted.”