Tech Companies Are Scrambling To Remove Video of The Christchurch Shooting

Since the Christchurch massacre, social media platforms have scrambled to keep video of the attack off their sites. Yet days later, it’s not difficult to find clips or still images from it. To many, this raises questions about tech companies’ failure to regulate hate on their platforms and about who shares responsibility in moments like this.

After the shooting, in which at least 50 Muslims were killed at two New Zealand mosques, archives of the alleged shooter’s page revealed that only 10 people had tuned into his Facebook Live broadcast of the attack, according to The Wall Street Journal.

Although the original video didn’t have many viewers, it exploded across social media in the days following the attack. Facebook, which has faced the brunt of criticism because its site hosted the livestream, says it removed 1.5 million videos of the New Zealand shooting in the 24 hours after the attack was broadcast.

In a thread on Twitter, the company said it blocked over 1.2 million of those videos before they were uploaded. That still means about 300,000 videos managed to appear on Facebook, and even that number is far too big.

On Sunday, New Zealand Prime Minister Jacinda Ardern told reporters during a press conference in Wellington that Facebook chief operating officer Sheryl Sandberg had reached out and that the two have plans to discuss the livestream.

“Certainly, I have had some contact with Sheryl Sandberg. I haven’t spoken to her directly but she has reached out, an acknowledgement of what has occurred here in New Zealand,” Ardern said.

She went on to add, “This is an issue I will look to be discussing directly with Facebook. We did as much as we could to remove, or seek to have removed, some of the footage that was being circulated in the aftermath of this terrorist attack. But ultimately, it has been up to those platforms to facilitate their removal.”

In addition to a livestream, the alleged shooter uploaded a 17-minute video to Facebook, Instagram, Twitter, and YouTube.

An Uncontrollable Spread

Part of the issue is that companies have come to over-rely on artificial intelligence software that can’t reliably detect violent content as it’s being broadcast, as noted by The Wall Street Journal. Although some platforms, like Facebook, have human content moderation teams, those moderators are often overworked and traumatized, and some end up radicalized themselves.

Once a video goes up, it’s not difficult for people to download it, create copies, and slightly doctor them in order to repost. For example, The Wall Street Journal reported that a version of the video was edited to look like a first-person shooter game and then uploaded to Discord, a messaging app for video gamers.

Since the attack, YouTube said it has removed thousands of uploads of the video, but even the Google-owned company couldn’t stop the spread of the footage quickly enough. Elizabeth Dwoskin and Craig Timberg of The Washington Post reported that the tech giant had to take drastic measures:

As its efforts faltered, the team finally took unprecedented steps — including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems. Many of the new clips were altered in ways that outsmarted the company’s detection systems.

-The Washington Post
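Part of why altered clips slip through comes down to how re-upload matching generally works: platforms fingerprint known footage and block uploads whose fingerprints match, and edits are meant to push a copy far enough away from the original fingerprint. The toy sketch below illustrates the idea with a simple “average hash” on a single frame; it is only an illustration of the general technique, not any platform’s actual system (production tools rely on far more robust perceptual hashing, such as Microsoft’s PhotoDNA for imagery).

```python
# Toy perceptual "average hash": a cryptographic hash (e.g., MD5) changes
# completely after any edit, while a perceptual hash of a slightly altered
# frame stays close. Illustrative only; not any platform's real pipeline.
from PIL import Image, ImageFilter

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Shrink to size x size grayscale; one bit per pixel brighter than the mean."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Stand-in for a video frame, plus a lightly doctored copy of it.
original = Image.effect_noise((256, 256), 64)
doctored = original.filter(ImageFilter.GaussianBlur(1))

# The two fingerprints stay within a few bits of each other, so a
# distance threshold can still flag the doctored copy as a match.
print(hamming(average_hash(original), average_hash(doctored)))
```

Heavier edits, like the first-person-shooter re-cut, push a copy further from the original fingerprint, which is why platforms end up chasing and re-hashing each new variant.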

But even as tech companies, with all their engineering support, took down the video, people who wanted to see it and distribute it knew exactly where to go.

Back in 2018, Reddit quarantined its infamous subreddit r/watchpeopledie, which allowed people to do exactly what its name said: watch videos of people dying. According to TechCrunch, the subreddit shared extremely graphic videos, like the 2018 murder of two female tourists in Morocco.

Despite the quarantine, people could still access the subreddit directly, and activity surged as people sought out videos of the Christchurch shooting. TechCrunch reported that one of the subreddit’s moderators locked a thread about the video and posted the following statement:

“Sorry guys but we’re locking the thread out of necessity here. The video stays up until someone censors us. This video is being scrubbed from major social media platforms but hopefully Reddit believes in letting you decide for yourself whether or not you want to see unfiltered reality. Regardless of what you believe, this is an objective look into a terrible incident like this.

Remember to love each other.”

Late Friday morning, Reddit finally banned the subreddit entirely and similar ones such as r/gore and r/wpdtalk (“watch people die talk”). A spokesperson told TechCrunch, “We are very clear in our site terms of service that posting content that incites or glorifies violence will get users and communities banned from Reddit.”

Valve also had to remove over 100 profiles from Steam, its gaming platform, that praised the shooter. According to Kotaku, dozens of users on the site were paying tribute to the alleged shooter. One profile even showed a GIF of the attack, and others called the shooter a “saint” or “hero,” or referred to him as “Kebab Remover.”

Concern over social media’s role in spreading the attack isn’t confined to New Zealand. The leader of Britain’s Labour Party, Jeremy Corbyn, told Sky News on Sunday, “The social media platforms which were actually playing a video made by this person who is accused of murder…all over the world, that surely has got to stop.”

Corbyn went on to explain that although the responsibility rests in the hands of the platforms’ operators, the episode calls into question how social media companies are regulated.

The spread of hateful messages is one of social media’s biggest and oldest problems.

Before Cambridge Analytica or any other misinformation battle, hate speech and harassment were at the forefront of these platforms, and groups were able to use them as megaphones to spread their messages. Facebook is one of the wealthiest companies in the world. It supposedly employs the smartest people and the best engineers. So why hasn’t this problem, one that’s festered for so long, been fixed? That’s something tech companies are going to have to start answering for.


Howard University and Other Top Schools Partner in Building Public Interest Tech Network

Technology is a part of everyday life in the United States, but people aren’t always equipped to understand its impacts on society. There are a lot of social implications behind the rising use of artificial intelligence, for example, or even social media.

In an effort to better equip students to talk about these consequences, top universities from across the country have partnered to launch the Public Interest Technology Universities Network, as reported by The New York Times.

The network aims to develop curriculum, research agendas, and experiential learning programs to train students so they better understand tech, according to a press release.

“We think about two halves of the pipeline,” Alexandra Givens, executive director of the Institute for Technology Law and Policy at Georgetown Law School, told The New York Times. “One is helping technologists think about the social, ethical, legal and policy implications of their work.”

This is important because it’s easy to make tech without fully considering the implications it’ll have once it’s released. Take self-driving cars, for example, whose existence now opens up a lot of questions about who is accountable when an accident occurs.

The network also addresses the importance of making sure people who aren’t in the tech field still know how to understand it.

“We spend a ton of time telling students: ‘If you care about civil rights in America today, or if you care about criminal justice reform, you have to understand technology and speak up about how technology is being deployed,'” Professor Givens told The New York Times.

The failure to train people on how to understand tech’s impacts on our daily lives can have big consequences.

In the United States, for example, Congress is often criticized for its inability to keep up with tech and is only now taking steps toward a federal data privacy law.

Knowing how to navigate tech is also increasingly important as big companies such as Facebook and YouTube come under fire for spreading vaccine misinformation and conspiracy theories.

Currently, the network is made up of 21 universities and colleges, including Arizona State University, the City University of New York, Harvard University, Howard University, M.I.T., Stanford University, and the University of California, Berkeley.

President Trump is Expected to Sign an Executive Order on AI. Here’s Why It Matters

The United States fell behind when 18 countries around the world launched programs to stimulate AI development. Now, President Trump is expected to sign an executive order launching the American Artificial Intelligence (AI) Initiative.

A senior administration official reportedly told CNN that the initiative outlines “bold, decisive actions to ensure that AI continues to be fueled by American ingenuity, reflects American values and is applied for the benefit of the American people.”

Goals of the AI initiative will be split into the following five areas, as reported by multiple outlets: Research and Development, Resources, Ethical Standards, Automation, and International Outreach.

America is still the world’s leader in AI research, but recent investments in the technology by China, France, and South Korea are more than likely what’s fueling this new order from the president.

“This executive order is about ensuring continued American leadership in AI, which includes ensuring AI technologies reflect American values, policies, and priorities,” an administration official told Axios.

While major voices in the tech community have applauded the initiative for making AI a policy priority, it fails to reference some key concerns. AI technologies such as facial recognition have the potential to infringe upon privacy and civil liberties.

Certain aspects of AI have been under fire over the past few years. One of the most notable incidents came when Amazon’s Rekognition technology falsely matched 28 members of Congress, most of them people of color, with public mugshots. Several civil rights groups have called on the tech industry not to sell its AI technology to the government, and companies like Microsoft have called for federal regulation of facial recognition technology, claiming that AI is amplifying widespread surveillance.

Jason Furman, a Harvard professor who served as chairman of the Council of Economic Advisers under President Obama and helped draft that administration’s 2016 report on AI, told Technology Review, “The Administration’s American AI Initiative includes all of the right elements, the critical test will be to see if they follow through in a vigorous manner.”

The administration has not provided many details about the plan, such as which projects will be launched or how much money will go into funding the different initiatives.

Additional information will be released over the next six months.


Uber Partners With Bay Area Tech Training Programs

Uber has partnered with several organizations in San Francisco to help increase access to high-quality jobs.

The ride-sharing giant announced in a blog post its plans to commit $100,000 to the nonprofits <dev/Mission>, Code Tenderloin, and Opportunities for All in an effort to train the next generation of technologists.

In addition to the donation, the company will provide office space and offer volunteer time for employees to work with students on coding and interview skills.

“The donations build on long-standing relationships we’ve established with each of these groups that date back to their founding days,” the company said.

<dev/Mission> will use the grant to add 30 students and 10 new internships to the program. Code Tenderloin will add over 50 students to its Job Readiness Class and Coding program, which teaches students how to build a resume and prepares them to interview.

Opportunities for All, an initiative led by San Francisco Mayor London Breed to expand access to youth employment, will bring additional interns into its program.

IBM Releases Dataset to Help Reduce Bias in Facial Recognition Systems

IBM wants to make facial recognition systems more fair and accurate.

The company just released a research paper along with a substantial dataset of 1 million images annotated with intrinsic facial features, including facial symmetry, skin color, age, and gender.

The tech giant hopes to use the Diversity in Faces (DiF) dataset to advance the study of diversity in facial recognition and further aid the development of the technology.

“Face recognition is a long-standing challenge in the field of Artificial Intelligence (AI),” the authors of the paper wrote. “However, with recent advances in neural networks, face recognition has achieved unprecedented accuracy, built largely on data-driven deep learning methods.”

John Smith, lead scientist at IBM, told CNBC that many prominent datasets lack balance and coverage of facial images.

“In order for the technology to advance it needs to be built on diverse training data,” he said. “The data does not reflect the faces we see in the world.”

Bias in facial recognition technology is an ongoing issue in the industry, and tech companies are starting to take steps to address the problem. In December, Microsoft President Brad Smith wrote a company blog post outlining risks and potential abuses of facial recognition technology, including threats to privacy and democratic freedoms, and discrimination.

The company also wrote that it is calling for new laws that regulate artificial intelligence software to prevent bias.

Joy Buolamwini, a researcher at the M.I.T. Media Lab, studied how biases affect artificial intelligence and found the technology misidentified the gender of darker-skinned women 35 percent of the time.

“You can’t have ethical A.I. that’s not inclusive,” Buolamwini said in the New York Times. “And whoever is creating the technology is setting the standards.”
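Findings like Buolamwini’s come from disaggregated evaluation: rather than reporting one overall accuracy number, error rates are computed separately for each demographic subgroup. Here is a minimal sketch of that bookkeeping, using made-up placeholder records rather than any real study data:

```python
# Disaggregated evaluation: per-subgroup error rates instead of one
# overall accuracy figure. The records are hypothetical placeholders.
from collections import defaultdict

records = [
    # (subgroup, true_label, predicted_label)
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]

tallies = defaultdict(lambda: [0, 0])  # subgroup -> [wrong, total]
for group, truth, pred in records:
    tallies[group][0] += truth != pred  # bool counts as 0 or 1
    tallies[group][1] += 1

for group, (wrong, total) in tallies.items():
    print(f"{group}: {wrong / total:.0%} error rate ({wrong}/{total})")
```

An aggregate accuracy score would hide exactly the disparity this breakdown surfaces, which is why researchers push for subgroup reporting.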

IBM’s Diversity in Faces dataset is available to the public and researchers are urging others to build on this work.

“We selected a solid starting point by using one million publicly available face images and by implementing ten facial coding schemes,” they wrote in the paper. “We hope that others will find ways to grow the data set to include more faces.”

A New Study Shows Screen Time Stunts Childhood Development

A new study from the University of Calgary shows that children ages 2 to 5 who engage in more screen time are more likely to score lower on developmental screening tests.

“What sets this study apart from previous research is that we looked specifically at the lasting impacts of screen time,” said Sheri Madigan, Ph.D., assistant professor in the Department of Psychology at the University of Calgary. “What these findings tell us is that one reason there may be disparities in learning and behavior at school entry is because some kids are in front of their screens far too often in early childhood.”

Ninety-eight percent of children in the United States under 8 years old live in a home with an internet-connected device and spend an average of two hours a day on screens, according to a report by Common Sense Media.

One out of every four children entering school is inadequately prepared for learning and academic success, a gap that widens over time if not addressed.

Families who participated in the study reported their children spent an average of 2.4, 3.6 and 1.6 hours of screen time per day at two, three and five years of age, respectively.

“A lot of the positive stimulation that helps kids with their physical and cognitive development comes from interactions with caregivers,” said Dr. Madigan. “When they’re in front of their screens, these important parent-child interactions aren’t happening, and this can delay or derail children’s development.”

The study notes that when children are engaging in screen time, they’re inactive and missing out on crucial opportunities to walk and run, which help them practice motor and communication skills.

5 Apps to Jumpstart Your Productivity

If you’re not getting enough work done in a day, you may not be managing your time well.

Procrastination, disorganization, and poorly defined goals can all lead to mismanaged time. Wasted time often equates to poor work-life balance, missed deadlines, and heightened anxiety and stress.

In most cases, you don’t need more time in the day; you need to manage your time better. With today’s technology, there are plenty of apps that can help you manage your time effectively and increase your productivity.

Here is a list of 5 apps that will help you manage your time more effectively so you can focus on getting work done:

  1. Focus Booster
    The Focus Booster app uses the Pomodoro Technique to help individuals focus. Dedicated to helping you stop procrastinating, this app lets you plan blocks of focused work and the breaks between them (a toy sketch of that work/break cycle follows this list).
  2. Evernote
    Evernote helps individuals stay organized across different platforms. Create shortcuts to quickly access the content you use most, like notes, tags, and searches. Use the Web Clipper to save sections of websites and annotate them.
  3. Workflow
    The Workflow app is a customizable program that shaves off the time it takes to complete a task. Make one-click shortcuts that complete multiple tasks like having Google Maps automatically pull up directions to the next meeting on your calendar.
  4. RescueTime
    The RescueTime app tracks the amount of time you spend in applications and on websites, providing a full picture of your daily productivity. Download it to figure out how to get things done more quickly and eliminate the distractions holding you back.
  5. Clear
    Clear is a to-do list app that keeps you organized and on task. Use it to sort your daily tasks into themed lists, giving you a quick look at what you need to do next and the steps to take along the way.
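For the curious, here is a toy sketch of the work/break cycle behind the Pomodoro Technique that Focus Booster implements. The 25- and 5-minute defaults are the technique’s usual conventions, not anything specific to the app:

```python
# Minimal Pomodoro loop: fixed focus intervals separated by short breaks.
# Purely illustrative; real apps add pause/resume, notifications, and stats.
import time

def pomodoro(sessions: int = 4, work_min: float = 25, break_min: float = 5) -> None:
    for i in range(1, sessions + 1):
        print(f"Session {i}: focus for {work_min} minutes...")
        time.sleep(work_min * 60)          # the focused work interval
        if i < sessions:                   # no short break after the last session
            print(f"Take a {break_min}-minute break.")
            time.sleep(break_min * 60)
    print("All sessions done. Take a longer break.")

# Shortened intervals so the demo finishes quickly.
pomodoro(sessions=2, work_min=0.1, break_min=0.05)
```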


Viacom Buys Nas-Backed Pluto TV for $340 Million

Media giant Viacom has reached a deal to acquire Pluto TV, the free streaming television service backed by Nas’ Queensbridge Venture Partners, for $340 million.

The platform has developed over 130 partnerships with media networks since its founding in 2013 and streams over 100 channels and thousands of hours of on-demand content, including sports, news, and cartoons.

“Today marks an important step forward in Viacom’s evolution, as we work to move both our company and the industry forward,” said Viacom President and CEO Bob Bakish in a press release. “Pluto TV’s unique and market-leading product, combined with Viacom’s brands, content, advanced advertising capabilities, and global scale, creates a great opportunity for consumers, partners, and Viacom.”

Queensbridge Venture Partners invested in Pluto TV in 2014. 

It’s another win for Nas, whose firm has also invested in Lyft, Casper, Genius, and Dropbox.

/dev/color Is Bringing In Its Largest Cohort of Black Engineers Ever

Tech nonprofit /dev/color just announced the induction of its largest “Squad” of Black engineers yet, growing its membership from 225 to 370. It has also unveiled plans to operate cohorts in two new cities this year.

/dev/color convenes a visible force of Black software engineers to uplift and empower one another within the overwhelmingly white tech industry. Nearly four years old, the organization began with just 11 members in San Francisco, and now has chapters in Atlanta, Seattle, and New York City.

Its flagship A* program offers professional engineers a year-long membership that includes monthly meetings with peer groups (called “Squads”), access to exclusive events, and tools to design an individualized career roadmap.

“It’s rare for folks to take retention into their own hands,” said Lajuanda Asemota, Interim Executive Director of /dev/color. “It’s not just learning and development opportunities that keep people at companies and in the industry. It’s also their sense of belonging and sense of confidence.”

Conversations around tech diversity often center on the acknowledgment that Black workers make up nearly 8 percent of the industry and that recruitment strategies (or the lack thereof) have failed to address this gross underrepresentation, though numerous organizations are working to remedy these faults.

Changing the narrative around diversity, equity, and inclusion is part of /dev/color’s work to ensure these connections exist beyond this organization.

The nonprofit reports that as a result of its A* Program, over 70 percent of members received an increase in compensation in 2017. Of those, 34 percent received a salary increase of 15 percent or more.

“We’re contributing to intergenerational wealth and community growth,” said Asemota. “Folks are able to achieve their goals and grow their careers.”

Black tech workers are the lowest paid in the tech sector, according to a 2018 report from Hired. Across the industry, Black employees earn an average of $130,000, about $6,000 less than their white counterparts.

The Women in the Workplace 2018 study published by LeanIn.org and McKinsey & Company found that Black women are asking for promotions and raises at the same rate as their white counterparts, but are not getting the same outcomes.

“The diversity in tech conversation has gotten a little bit repetitive,” said Asemota. “I’m really hopeful that people will capitalize on the history of the work, and really think critically about how to do things creatively that will actually have an impact.”

This Program Is Teaching Incarcerated Youth How To Code

A new coding program at a juvenile correctional facility in California is teaching incarcerated youth how to build websites and apps in an effort to reduce recidivism rates.

Code.7370 is an 18-month training program supported by The Last Mile, a non-profit organization working to provide offenders with marketable job skills that lead to employment.

The program, based at the O.H. Close Youth Correctional Facility in Stockton, is part of California Gov. Gavin Newsom’s juvenile justice reform proposals, which would move the state’s Juvenile Justice Division out of corrections officials’ control and into the hands of health and human services providers.

“If we’re going to get serious about changing the trajectory of the lives of these young children, I think we need to do it through a different lens,” said Gov. Newsom in an interview at the facility. 

According to the Division of Juvenile Justice, an early 2017 report showed 74.2 percent of California youth were re-arrested, 53.8 percent were reconvicted of new offenses, and 37.3 percent had returned to state custody within three years of release.

The Last Mile is supported by a $2 million grant from Google.org.