Facebook Is Giving More Details on How It Handled Video of the Christchurch Shooting

On Wednesday, in response to continued pressure from multiple countries, Facebook published a blog post by Guy Rosen, Facebook’s VP of product management, providing further details on the company’s response to the Christchurch shooting in New Zealand that left 50 people dead.

Companies like Facebook often use artificial intelligence to identify content that should be removed. However, an overreliance on AI can lead to exactly what happened with the Christchurch video: the shooting was broadcast on Facebook Live, and afterward, copies of the video continued to spread across the internet.

Facebook continues to point out that the video was viewed only about 200 times during its live broadcast and that nobody flagged it to moderators, but those explanations aren’t cutting it for users and lawmakers.

Now, the company says it tried an experimental audio technology to catch copies of the video that its AI missed. Facebook wrote that it “employed audio matching technology to detect videos which had visually changed beyond our systems’ ability to recognize automatically but which had the same soundtrack.”

In the post, Facebook noted that AI requires “training data”: essentially, it learns to block a specific type of content only after seeing many examples of it. That also means the AI can be confused if a video is slightly altered visually, for example by being doctored, or by someone recording their own screen and posting that copy.
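Audio matching sidesteps that visual fragility: a soundtrack can be reduced to a compact fingerprint that survives re-encoding and visual edits. Below is a minimal, generic sketch of the idea in Python (using numpy and scipy). It only illustrates the general technique, not Facebook’s system, and every function and parameter in it is invented for this example.

```python
# A generic sketch of audio fingerprint matching, NOT Facebook's system.
# It summarizes a soundtrack by the loudest frequency band in each time frame,
# so a re-encoded or visually altered copy still produces a similar signature.
import numpy as np
from scipy.signal import chirp, spectrogram

def audio_fingerprint(samples: np.ndarray, sample_rate: int, n_bands: int = 32) -> np.ndarray:
    """Return the index of the dominant coarse frequency band for each frame."""
    _, _, spec = spectrogram(samples, fs=sample_rate, nperseg=2048, noverlap=1024)
    bands = np.array_split(spec, n_bands, axis=0)             # coarse frequency bands
    energy = np.stack([band.sum(axis=0) for band in bands])   # shape: (n_bands, n_frames)
    return energy.argmax(axis=0)                               # dominant band per frame

def similarity(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Fraction of frames whose dominant band matches, over the overlapping length."""
    n = min(len(fp_a), len(fp_b))
    return float(np.mean(fp_a[:n] == fp_b[:n]))

# Toy demo: a rising tone stands in for the original soundtrack.
rate = 16_000
t = np.linspace(0, 10, 10 * rate, endpoint=False)
original = chirp(t, f0=200, t1=10, f1=4000)                    # stand-in soundtrack
reencoded = original + 0.05 * np.random.randn(original.size)   # same audio, degraded
unrelated = chirp(t, f0=4000, t1=10, f1=200)                   # a different track

fp = audio_fingerprint(original, rate)
print("same soundtrack:", similarity(fp, audio_fingerprint(reencoded, rate)))
print("different track:", similarity(fp, audio_fingerprint(unrelated, rate)))
# The degraded copy should score far higher than the unrelated track.
```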

Facebook didn’t provide many more details on the audio technology. But it’s clear the company is trying to be more transparent amid increased government pressure to explain its process.

New Zealand Prime Minister Jacinda Ardern has made it clear that she’s been unimpressed with Facebook so far. She has been in contact with Facebook’s chief operating officer, but she stepped up her criticism of the platform on Tuesday in Parliament.

Facebook is also facing potential consequences here in the United States.

Yesterday, Rep. Bennie G. Thompson, chairman of the House Homeland Security Committee, wrote a letter to executives at top companies, including Facebook and YouTube, where the video continued to circulate.

In it, Thompson said they must “do better” and called for executives, including Facebook’s CEO Mark Zuckerberg, to appear before Congress in order to explain their process and ensure something like this won’t happen again.

What is perhaps most frustrating about the entire conversation so far is that it continues to center on reactivity. Facebook says it couldn’t stop the original video because nobody reported it, but the type of online hate that the shooter cited as inspiration has existed on the platform for years.

Social media companies have historically taken a relaxed approach to confronting online hate. But Christchurch, and the lack of appropriate response to it, isn’t an event that can easily be swept under the rug.

Why a Philosophy Professor Is Leading an AI Program at Stanford

Artificial intelligence is everywhere, but that doesn’t mean it’s always being applied appropriately. There are rising concerns about AI’s performance and its potential to worsen, or even introduce, social issues.

It all comes down to figuring out how to use AI in an ethical manner. Now, it seems some universities are taking on that responsibility themselves.

On Monday, Stanford University announced the launch of a new program dedicated to guiding and developing human-centered artificial intelligence technologies and applications.

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) will incorporate a focus on multidisciplinary collaboration. This is important because the lack of diversity in AI is glaringly apparent. That includes a lack of racial diversity, which has led to algorithms that do things like mistake Black people for gorillas.

It also pushes back against the notion that tech should exist separate from everything else. When programs are developed, people outside of the tech world aren’t always brought in to consult.

Tech impacts our social lives, and it can affect health, laws, and more. That means the future needs people at the table who may not be in the tech field themselves but who understand how its applications impact other areas.

“Now is our opportunity to shape that future by putting humanists and social scientists alongside people who are developing artificial intelligence,” Stanford President Marc Tessier-Lavigne said.

Stanford will partner with industry, governments, and non-governmental organizations who “share the goal of a better future for humanity through AI.”

Stanford’s new institute shouldn’t come as a surprise, considering the university recently partnered with other top universities to launch the Public Interest Technology Universities Network.

That program aims to develop curriculum, research agendas, and experiential learning programs so students are trained to better understand tech. In addition, the program is devoted to teaching students to understand where tech intersects with other issues, like criminal justice, civil rights, and more.

There’s no guarantee these programs will lead to radical results, but it is still a start. Hopefully, they will prompt important conversations about tech’s role in people’s lives.

Stanford’s HAI will be led by John Etchemendy, a professor of philosophy and former Stanford University provost, and Fei-Fei Li, a professor of computer science and former director of the Stanford AI Lab.

A New Harvard-MIT Program Is Granting $750k To Projects Creating Ethical AI

Artificial intelligence is becoming increasingly common in people’s day-to-day lives. That also means people are now more aware of what can happen if AI isn’t responsibly created.

For example, AI has the potential to spew out loads of misinformation, as seen when the former nonprofit OpenAI tested a text generator and deemed it too dangerous to release. That can be particularly dangerous when people don’t know where to go to fact-check information.

A joint Harvard-MIT program hopes to combat some of AI’s issues by working to ensure future AI developments are ethical. Today, the program announced the winners of the AI and the News Open Challenge. Winners will receive $750,000 in total.

The challenge was put on by the Ethics and Governance of AI Initiative. Launched in 2017, it’s a “hybrid research effort and philanthropic fund” that’s funded by MIT’s Media Lab and Harvard’s Berkman Klein Center.

“As researchers and companies continue to advance the technical state of the art, we believe that it is necessary to ensure that AI serves the public good,” the AI initiative shared in a blog post. “This means not only working to address the problems presented by existing AI systems, but articulating what realistic, better alternatives might look like.”

In general, the projects selected look at tech and its role in keeping people informed. Even in looking at just a few of the winners, it’s clear important work is being done.

For example, the MuckRock Foundation’s Sidekick project is a machine learning tool that will help journalists comb through massive sets of documents. Then there’s Legal Robot, a tool that will mass-request and then quickly extract data from government contracts.

Some of the projects, like Tattle, are also tackling misinformation. The tool will be used to specifically address misinformation on WhatsApp, and it’ll support fact-checkers working in India.

This isn’t the first time the initiative has given out grants, but it is the first time they’ve given them out in response to an open call for ideas.

“It’s naive to believe that the big corporate leaders in AI will ensure that these technologies are being leveraged in the public interest,” the initiative’s director, Tim Hwang, said, according to TechCrunch. “Philanthropic funding has an important role to play in filling in the gaps and supporting initiatives that envision the possibilities for AI outside the for-profit context.”

Google’s ‘TensorFlow’ Addition Encourages AI Developers to Keep Data Private

With conversations around data protection and privacy becoming more frequent, big tech companies have to step up and participate. Now, it seems Google is making an effort to develop ethical AI.

Recently, Google introduced TensorFlow Privacy, a new tool that makes it easier for developers to improve the privacy of AI models. It’s an addition to TensorFlow, a popular framework used to create text, audio, and image recognition algorithms, and more.

TensorFlow Privacy uses a technique based on the theory of “differential privacy.” Essentially, this approach trains AI models not to encode personally identifiable information. This is important because nobody wants AI to put all of their business into the world.

Google developing this tool means the company is actually following the principles for responsible AI development that it outlined in a blog post last year. In the post, Google’s CEO Sundar Pichai wrote, “We will incorporate our privacy principles in the development and use of our AI technologies.”

Differential privacy is already used by tech companies. Google itself incorporated it into Gmail’s Smart Reply, as noted by The Verge. That’s why, when AI makes suggestions for completing a sentence, it doesn’t broadcast anyone’s personal information.

This is also Google’s way of making sure the technology is available for everyone else. Google product manager Carey Radebaugh told The Verge, “If we don’t get something like differential privacy into TensorFlow, then we just know it won’t be as easy for teams inside and outside of Google to make use of it.”

There are still some kinks to be worked out with differential privacy because it can sometimes remove useful or interesting data. However, kinks can’t be worked out if nobody ever uses the tool.

Radebaugh told The Verge, “So for us it’s important to get into TensorFlow, to open source it, and to create community around it.”
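For a sense of what using the tool looks like in practice, here is a minimal sketch of differentially private training with TensorFlow Privacy’s Keras optimizer. The library and the DPKerasSGDOptimizer class are real, but the tiny model and every hyperparameter value below are illustrative placeholders, and API details may vary across versions.

```python
# A minimal sketch of differentially private training with TensorFlow Privacy.
# The library and DPKerasSGDOptimizer are real; the tiny model and all
# hyperparameter values below are illustrative placeholders only.
import tensorflow as tf
import tensorflow_privacy

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(10),
])

# The DP optimizer clips each example's gradient and adds calibrated noise,
# limiting how much any single person's data can influence the trained model.
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # cap on each example's gradient norm
    noise_multiplier=1.1,   # more noise => stronger privacy, usually lower accuracy
    num_microbatches=32,    # must evenly divide the batch size
    learning_rate=0.1,
)

# Per-example losses (no reduction) are required so gradients can be clipped individually.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.losses.Reduction.NONE,
)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=32, epochs=5)  # train as usual on your data
```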

Report: D&I Technology Should Be Used As A Guide, Not An Entire Solution

Diversity and inclusion have become increasingly relevant in the corporate world. More executives are being hired specifically to improve workplace culture and provide safe spaces for underrepresented groups.

Companies are turning to diversity and inclusion technologies to help identify and solve their issues. According to a study by RedThread Research and Mercer, the D&I technology market is expanding quickly, with an estimated size of $100 million.

The report noted that there are some dangers in using artificial intelligence in diversity and inclusion technologies because machine learning can amplify stereotypes, adversely impacting underrepresented and marginalized groups.

Human biases can unintentionally be embedded into algorithms, causing discriminatory features in AI products and tools.

The report suggests that organizations be cognizant of AI’s flaws, since algorithms are programmed by humans with their own innate biases. Before purchasing diversity and inclusion technology, it recommends that companies request algorithmic audits and risk assessments to see how the tools impact underrepresented groups.

Companies using artificial intelligence have come under fire over the past couple of years after it was revealed that some products discriminated against certain groups. Last year, Amazon nixed an automated recruitment tool that discriminated against women, according to Reuters.

The report also suggested that companies use AI results and data “directionally” rather than depending on the technology to completely solve diversity and inclusion issues.

“Diversity and inclusion has long been a priority for many of our clients and other organizations,” Carole Jackson, co-author of the report and senior principal in Mercer’s Diversity & Inclusion consulting practice, said to CIO.com. “But it wasn’t always a top ‘business priority’ for CEOs. It was often considered ‘the right thing to do’ and with that came nominal budgets and superficial support from leaders.”

The NBA Is Upping Its Tech Game With These ‘Smart Jerseys’

The NBA is introducing new “smart jerseys” along with a list of other technologies to keep fans enticed.

At the NBA’s Technology Summit during All-Star Weekend, NBA Commissioner Adam Silver demoed a smart jersey that allows users to change the displayed player name and number through an app.

So if your favorite player is not doing well, or you simply switch your allegiance from, say, James Harden to Giannis Antetokounmpo, you can change your gear at the touch of a button.

Silver also noted that, by 2038, fans would be able to enter games using facial recognition and enjoy hologram mascots.

No, these will not be available wherever you buy team gear in the near future, but it’s still cool that the NBA is thinking of new ways to engage fans using technology.

President Trump Is Expected to Sign an Executive Order on AI. Here’s Why It Matters

The United States fell behind when 18 countries around the world launched programs to stimulate AI development. Now, President Trump is expected to sign an executive order launching the American Artificial Intelligence (AI) Initiative.

A senior administration official reportedly told CNN that the initiative outlines “bold, decisive actions to ensure that AI continues to be fueled by American ingenuity, reflects American values and is applied for the benefit of the American people.”

Goals of the AI initiative will be split into the following five areas, as reported by multiple outlets: Research and Development, Resources, Ethical Standards, Automation, and International Outreach.

America is still the world’s leader in AI research, but recent investments in the technology from China, France, and South Korea are most likely what’s fueling this new order from the president.

“This executive order is about ensuring continued America leadership in AI, which includes ensuring AI technologies reflect American values, policies, and priorities,” an administration official told Axios. 

While major voices in the tech community have applauded the initiative for making AI a policy priority, it fails to reference some key concerns. AI technologies such as facial recognition have the potential to infringe upon privacy and civil liberties.

Certain aspects of AI have been under fire over the past few years. One of the most notable incidents came when Amazon’s Rekognition technology falsely matched 28 members of Congress, most of them people of color, with public mugshots. Several civil rights groups have called on the tech industry not to sell its AI technology to the government, and companies like Microsoft have called for federal regulation of facial recognition technology, claiming that AI is amplifying widespread surveillance.

Jason Furman, now a Harvard professor who served as chairman of the Council of Economic Advisers under President Obama and helped draft that administration’s 2016 report on AI, told Technology Review: “The Administration’s American AI Initiative includes all of the right elements, the critical test will be to see if they follow through in a vigorous manner.”

The administration has not provided many details on the plan, such as which projects will be launched or how much money will go into funding the different initiatives.

Additional information will be released over the next six months.

IBM Releases Dataset to Help Reduce Bias in Facial Recognition Systems

IBM wants to make facial recognition systems more fair and accurate.

The company just released a research paper along with a substantial dataset of 1 million images annotated with intrinsic facial features, including facial symmetry, skin color, age, and gender.

The tech giant hopes to use the Diversity in Faces (DiF) dataset to advance the study of diversity in facial recognition and further aid the development of the technology.

“Face recognition is a long-standing challenge in the field of Artificial Intelligence (AI),” the authors of the paper wrote. “However, with recent advances in neural networks, face recognition has achieved unprecedented accuracy, built largely on data-driven deep learning methods.”

John Smith, lead scientist at IBM, told CNBC that many prominent datasets lack balance and coverage of facial images.

“In order for the technology to advance it needs to be built on diverse training data,” he said. “The data does not reflect the faces we see in the world.”
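To make that concrete, here is a short sketch of the kind of balance check Smith is describing: measuring how evenly a face dataset covers demographic attributes. The CSV file and column names in it (age_group, gender, skin_tone) are hypothetical stand-ins, not IBM’s actual DiF annotation schema.

```python
# A sketch of a dataset balance check: how evenly do annotations cover each group?
# The file name and columns (age_group, gender, skin_tone) are hypothetical,
# not IBM's actual Diversity in Faces schema.
import pandas as pd

annotations = pd.read_csv("face_annotations.csv")  # one row of attributes per image

for attribute in ["age_group", "gender", "skin_tone"]:
    share = annotations[attribute].value_counts(normalize=True).sort_index()
    print(f"\n{attribute} coverage:")
    print(share.round(3))
    # A heavily skewed distribution is a warning sign: models trained on this data
    # will see some groups far more often than others.
    print("most/least represented ratio:", round(share.max() / share.min(), 1))
```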

Bias in facial recognition technology is an ongoing issue in the industry, and tech companies are starting to take steps to address the problem. In December, Microsoft president Brad Smith wrote a company blog post outlining the risks and potential abuses of facial recognition technology, including threats to privacy and democratic freedoms, as well as discrimination.

The company also wrote that it is calling for new laws that regulate artificial intelligence software to prevent bias.

Joy Buolamwini, a researcher at the MIT Media Lab, has studied how biases affect artificial intelligence and found that the technology misidentified the gender of darker-skinned women 35 percent of the time.

“You can’t have ethical A.I. that’s not inclusive,” Buolamwini told The New York Times. “And whoever is creating the technology is setting the standards.”

IBM’s Diversity in Faces dataset is available to the public and researchers are urging others to build on this work.

“We selected a solid starting point by using one million publicly available face images and by implementing ten facial coding schemes,” they wrote in the paper. “We hope that others will find ways to grow the data set to include more faces.”

Neutrogena Is Using AI To Launch Personalized, 3D Printed Face Masks

Neutrogena, a household name in beauty products, is launching a new iOS app called MaskiD that will help with problem spots on users’ faces.

MaskiD relies on TrueDepth cameras in the iPhone X, XS, and XR to create 3D printed masks that fit the user’s face measurements. MaskiD can be paired with Neutrogena’s Skin360, one of the company’s other tools that uses artificial intelligence, to help with skin care.

Skin360 tracks the skin’s progress over time and analyzes its health and needs. It comes in two parts: a skin scanner and an app. The scanner pairs with an iPhone, magnifying the phone’s camera lens and enhancing the view with eight high-powered LED lights. The scanner also has a “moisture meter” to determine which areas of the face require more attention.

Skin360 scans different areas of a user’s face and, when paired with MaskiD, provides better skin care suggestions. MaskiD users can select from a variety of ingredients used to improve their skin, including stabilized vitamin C, purified hyaluronic acid, and N-Acetylglucosamine. Each section of the mask uses those ingredients to target the user’s problem areas.

MaskiD and Skin360 are two of Neutrogena’s latest products to use artificial intelligence. The company told The Verge that it plans to roll out even more products using AI.

This Researcher Is Using AI To Generate Tribal Masks

One researcher is using technology to recreate some of the world’s oldest and most beautiful works of art.

Human-computer interaction researcher Victor Dibia is using artificial intelligence to generate African masks based on his custom-curated dataset.

The Carnegie Mellon graduate was inspired to explore merging tribal art and AI after attending the Deep Learning Indaba conference in South Africa, where Google provided attendees with access to Tensor Processing Units (TPUs), the company’s custom-developed AI accelerator hardware.

He trained a generative adversarial network (GAN), a two-part neural network in which a generator produces samples and a discriminator attempts to distinguish those generated samples from real-world samples, to generate images based on the dataset he built.
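As a rough illustration of that generator-versus-discriminator setup, here is a minimal GAN training step in Python using PyTorch. It is a generic toy sketch with made-up dimensions and random stand-in data, not Dibia’s model or the African masks dataset.

```python
# A minimal GAN training step in PyTorch, illustrating the generator/discriminator
# idea described above. Generic toy sketch with made-up sizes and random data,
# not Dibia's model or dataset.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28   # illustrative sizes only

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),      # fake "images" scaled to [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability that the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator: learn to label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1))
              + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator: learn to produce samples the discriminator labels as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

# One step on random stand-in "real" images in [-1, 1].
train_step(torch.rand(32, image_dim) * 2 - 1)
```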

Dibia explains in a blog post that he manually created a dataset of over 9000 diverse images depicting African masks in different shapes and textures.

“The goal is not to generate a perfectly realistic mask, but more towards observing any creative or artistic elements encoded in the resulting GAN,” he wrote.

The researcher trained the GAN using a larger set of non-curated images from a web search, with initial results showing the model generating images “distinct from their closest relatives in the dataset.”

“GANs can be useful for artistic exploration,” he wrote of his findings. “In this case, while some of the generated images are not complete masks, they excel at capturing the texture or feel of African art.”

Dibia plans to expand the African masks dataset and continue experimenting with “conditioned GANs” and their relationship to artistic properties.