This Robot Protects Images of Indigenous Peoples From Being Drowned Out by Western Culture

Photographer Jimmy Nelson wants to use artificial intelligence to ensure stories and images of indigenous peoples aren’t lost while western culture continues to dominate the internet.

The Preservation Robot, unveiled this week at SXSW, is an automated platform that amplifies images of indigenous peoples on the internet, with the goal of protecting these seemingly forgotten cultures.

“Indigenous culture is visually underrepresented – and often misrepresented,” Nelson said during his unveiling. “I’m taking a stand by launching The Preservation Robot. Using technology to invert homogenization.”

Westernization and the rise of globalization have put indigenous peoples in danger of having their cultures erased, according to Nelson. The program aims to help combat that.

The robot works autonomously, using artificial intelligence to place images of indigenous peoples in open spaces on the internet, including social platforms and free cloud storage.

The Preservation Robot also combines automation with search engine optimization to ensure the photos appear more frequently in search results.

After traveling the world to photograph indigenous groups, Nelson worked to amplify his images and educate people.

The project’s inaugural set of images was taken by Nelson; later images will come from other photographers through the Jimmy Nelson Foundation.

Here are some of Nelson’s photos that will be featured in The Preservation Robot:

Photos: Jimmy Nelson

A New Harvard-MIT Program Is Granting $750k To Projects Creating Ethical AI

Artificial intelligence is becoming increasingly common in people’s day-to-day lives. That also means people are now more aware of what can happen if AI isn’t responsibly created.

For example, AI has the potential to spew out loads of misinformation, as seen when the former non-profit OpenAI tested a text generator and deemed it too dangerous to release. That can be particularly dangerous when people don’t know where to go to fact-check information.

A joint Harvard-MIT program hopes to combat some of AI’s issues by working to ensure future AI developments are ethical. Today, the program announced the winners of the AI and the News Open Challenge. Winners will receive $750,000 in total.

The challenge was put on by the Ethics and Governance in AI Initiative. Launched in 2017, it’s a “hybrid research effort and philanthropic fund” backed by MIT’s Media Lab and Harvard’s Berkman Klein Center.

“As researchers and companies continue to advance the technical state of the art, we believe that it is necessary to ensure that AI serves the public good,” the AI initiative shared in a blog post. “This means not only working to address the problems presented by existing AI systems, but articulating what realistic, better alternatives might look like.”

In general, the selected projects look at tech and its role in keeping people informed. Even a glance at a few of the winners makes it clear that important work is being done.

For example, the MuckRock Foundation’s Sidekick project is a machine learning tool that will help journalists sift through massive documents. Then there’s Legal Robot, a tool that will mass-request government contracts and quickly extract data from them.

Some of the projects, like Tattle, are also tackling misinformation. The tool will be used to specifically address misinformation on WhatsApp, and it’ll support fact-checkers working in India.

This isn’t the first time the initiative has given out grants, but it is the first time it has done so in response to an open call for ideas.

“It’s naive to believe that the big corporate leaders in AI will ensure that these technologies are being leveraged in the public interest,” the initiative’s director, Tim Hwang, said, according to TechCrunch. “Philanthropic funding has an important role to play in filling in the gaps and supporting initiatives that envision the possibilities for AI outside the for-profit context.”

Virtual Assistants Can Reinforce Sexist Stereotypes. These Researchers Want To Change That

If you have a virtual assistant, chances are high that it’s been gendered. Leading assistants like Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana generally default to a female voice, and that’s a problem.

Although it may seem trivial, tech can reinforce old stereotypes even as it innovates, including the stereotype that women exist only to follow orders and please others. In some ways, a virtual assistant is like having the stereotypical ideal of a secretary sitting on your dresser.

To combat this trend, a Denmark-based team recently presented a new voice at the South by Southwest (SXSW) Conference & Festivals in Texas, as reported by Reuters. Unlike leading voices, this one is designed to be neither male nor female.

Everybody, meet Q.

“Hi, I’m Q, the world’s first genderless voice assistant,” Q says for an introduction. “I’m created for a future where we are no longer defined by gender, but rather how we define ourselves.”

Q is a joint venture between Vice Media’s Virtue creative agency and Copenhagen Pride. For the project, the team purposefully recorded 22 transgender and non-binary people as the voice’s basis, according to Reuters.

“Technology companies often choose to gender technology believing it will make people more comfortable adopting it,” Q’s website reads. “Unfortunately this reinforces a binary perception of gender, and perpetuates stereotypes that many have fought hard to progress.”

By creating a voice meant to be genderless, and including both trans and non-binary people in the process, the team behind Q has taken one big step towards tackling gender biases in tech.

Before launch, Q was tested by more than 4,000 volunteers, Reuters reported. About half of them said they couldn’t assign a gender to the voice, and those who tried were evenly split between hearing it as male or female.

“We aim to get the attention of leading technological companies that work with AI to ensure they are aware that a gender binary normativity excludes many people and to inspire them by showing how easy it would actually be to recognize that more than two genders exist when developing artificial intelligence,” Thomas Rasmussen, head of communication for Copenhagen Pride, said, according to CNBC.

Right now, people can only interact with Q on a website. Hopefully, it will lead other companies to start including genderless voices in their digital assistants.

The Pentagon Has Awarded a Contract For Its Controversial Project Maven Program

Palmer Luckey is perhaps best known for founding the virtual reality firm Oculus VR, which Facebook bought for $3 billion. Now his new firm, Anduril Industries, has put him in the spotlight once again.

Anduril has won a contract to work on Project Maven, the Pentagon’s highly controversial drone AI program, as reported by The Intercept.

Luckey’s firm has a distinct focus on military technology, so it’s not surprising that he’s the one who will be working on the project. Founded in 2017, Anduril “invents and builds technology to secure America and its interests,” according to the company’s website.

According to The Verge, Luckey began working on Project Maven in 2018. The Intercept noted that Luckey actually hinted at the project last November at the Web Summit, a tech conference in Lisbon, Portugal.

“We’re deployed at several military bases. We’re deployed in multiple spots along the U.S. border,” Luckey said. “We’re deployed around some other infrastructure I can’t talk about.”

Luckey went on to add, “Practically speaking, in the future, I think soldiers are going to be superheroes who have the power of perfect omniscience over their area of operations, where they know where every enemy is, every friend is, every asset is.”

That “power of perfect omniscience” is essentially the goal of the Pentagon’s Project Maven, an artificial intelligence program that focuses on computer vision to extract “objects of interest” from video or still images. Ultimately, as The Intercept outlined, the project wants to put AI tech from the private sector to work for the military.

Anduril already has a system called Lattice, a “virtual border wall” using machine learning to identify objects for border monitoring, as outlined by Engadget.

According to The Verge, Lattice has helped border agents catch numerous people trying to cross the border. It also gives soldiers in combat zones 3D imagery and has completed its “first phase” of research, according to The Intercept, with plans to deploy in Afghanistan.

Anduril isn’t the first company to show interest in this particular project. Google originally had a contract with the Department of Defense but dropped it after employee backlash.

Palmer Luckey, co-founder of Oculus VR and founder of Anduril Industries.

While the controversy around Project Maven is one of the better-known disputes over the military’s use of AI, it’s not a one-off.

Many have expressed concern over how tech companies joining up with the military will speed up the development of autonomous weapons, as reported by Gizmodo. In fact, when more than 90 academics in AI, ethics, and computer science released an open letter calling for Google to end its Project Maven work, they also wanted an “international treaty to prohibit autonomous weapons systems.”

Technology columnist Navneet Alang outlined this military-technology complex and the issues that come with it in a 2018 article for The Week. He argues that “a broader involvement by the tech world in creating instruments of surveillance and tools for the military” seems poised to replace the military-industrial complex.

As people worry about tech’s potential to exacerbate social problems in the United States, there’s clearly reason to be concerned about what it can do to people outside U.S. borders and how it will be weaponized in the future.

Google’s ‘TensorFlow’ Addition Encourages AI Developers to Keep Data Private

With conversations around data protection and privacy becoming more frequent, big tech companies have to step up and participate. Now, it seems Google is making attempts to develop ethical AI.

Recently, Google introduced TensorFlow Privacy, a new tool that makes it easier for developers to improve the privacy of AI models. It’s an addition to TensorFlow, a popular framework used to create algorithms for text, audio, and image recognition, among other things.

TensorFlow Privacy uses a technique based on the theory of “differential privacy.” Essentially, this approach trains AI not to encode personally identifiable information. This is important because nobody wants AI to put all of their business into the world.
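
In practice, adopting the tool mostly means swapping a standard optimizer for a differentially private one. Below is a minimal sketch assuming the open-source tensorflow_privacy package’s DP-SGD Keras optimizer; the model, data, and hyperparameters are hypothetical placeholders, not Google’s own configuration.

```python
# Minimal sketch of differentially private training with TensorFlow Privacy.
# The network and hyperparameters below are illustrative placeholders.
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import (
    DPKerasSGDOptimizer,
)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2),
])

# DP-SGD clips each microbatch's gradient and adds calibrated Gaussian
# noise, bounding how much any single example can influence the model.
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,      # per-example gradient clipping bound
    noise_multiplier=1.1,  # scale of the added noise
    num_microbatches=32,   # must evenly divide the batch size
    learning_rate=0.1,
)

# The loss is left un-reduced so the optimizer can clip and noise
# per-microbatch losses before averaging them.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=32, epochs=5)  # training data assumed
```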

Google developing this program means the company is actually following the principles for responsible AI development that it outlined in a blog post last year. In the post, Google’s CEO Sundar Pichai wrote, “We will incorporate our privacy principles in the development and use of our AI technologies.”

Differential privacy is already used by tech companies. Google itself incorporated it into Gmail’s Smart Reply, as noted by The Verge. That’s why, when AI makes suggestions for completing a sentence, it doesn’t broadcast anyone’s personal information.

This is also Google’s way of making sure the technology is available for everyone else. Google’s product manager Carey Radebaugh told The Verge, “If we don’t get something like differential privacy into TensorFlow, then we just know it won’t be as easy for teams inside and outside of Google to make use of it.”

There are still some kinks to be worked out with differential privacy because it can sometimes remove useful or interesting data. However, kinks can’t be worked out if nobody ever uses the program.

Radebaugh told The Verge, “So for us it’s important to get into TensorFlow, to open source it, and to create community around it.”

Report: D&I Technology Should Be Used As A Guide, Not An Entire Solution

Diversity and inclusion have become increasingly relevant in the corporate world. More executive officers are being hired to improve workplace culture and provide safe spaces for underrepresented groups.

Companies are turning to diversity and inclusion technologies to help identify and solve their issues. According to a study by RedThread Research and Mercer, the D&I technology market is expanding quickly, with an estimated size of $100 million.

The report noted that there are some dangers in using artificial intelligence in diversity and inclusion technologies because machine learning can amplify stereotypes, adversely impacting underrepresented and marginalized groups.

Human biases can unintentionally be embedded into algorithms, producing discriminatory features in AI products and tools.

The report suggests that organizations be cognizant of some of AI’s flaws because these systems are programmed by humans with innate biases. Before purchasing diversity and inclusion technology, it recommends, companies should request algorithmic audits and risk assessments to see how the tools impact underrepresented groups.

Companies using artificial intelligence have come under fire over the past couple of years after it was revealed that some products discriminated against certain groups. Last year, Amazon nixed an automated recruitment tool that discriminated against women, according to Reuters.

The report also suggested that companies use AI results and data “directionally” rather than depending on the technology to completely solve diversity and inclusion problems.

“Diversity and inclusion has long been a priority for many of our clients and other organizations,” Carole Jackson, co-author of the report and senior principal in Mercer’s Diversity & Inclusion consulting practice, said to CIO.com. “But it wasn’t always a top ‘business priority’ for CEOs. It was often considered ‘the right thing to do’ and with that came nominal budgets and superficial support from leaders.”

Neutrogena Is Using AI To Launch Personalized, 3D Printed Face Masks

Neutrogena, a household name in beauty products, is launching a new iOS app called MaskiD that will help with problem spots on users’ faces.

MaskiD relies on the TrueDepth cameras in the iPhone X, XS, and XR to create 3D printed masks that fit the user’s face measurements. MaskiD can be paired with Neutrogena’s Skin360, another of the company’s AI-powered tools, to help with skin care.

Skin360 tracks the skin’s progress over time and analyzes its health and needs. Skin360 comes in two parts: the skin scanner and the app. The scanner attaches to an iPhone, magnifying the phone’s camera lens and enhancing the image with eight high-powered LED lights. The scanner also has a “moisture meter” to determine which areas of the face require more attention.

Skin360 scans different areas of a user’s face and, when paired with MaskiD, provides better skin care suggestions. MaskiD users can select from a variety of skin-improving ingredients, including stabilized vitamin C, purified hyaluronic acid, and N-Acetylglucosamine. Each section of the mask uses these ingredients to target the user’s problem areas.

MaskiD and Skin360 are two of Neutrogena’s latest products to use artificial intelligence. The company told The Verge that it plans to roll out even more AI-powered products.

This Researcher Is Using AI To Generate Tribal Masks

One researcher is using technology to recreate some of the world’s oldest and most beautiful works of art.

Human-computer interaction researcher Victor Dibia is using artificial intelligence to generate African masks based on his custom-curated dataset.

The Carnegie Mellon graduate was inspired to explore merging tribal art and AI after attending the Deep Learning Indaba conference in South Africa, where Google provided attendees with Tensor Processing Units (TPUs), the company’s custom-developed AI accelerator chips.

He trained a generative adversarial network (GAN), a two-part neural network consisting of a generator that produces samples and a discriminator that attempts to distinguish generated samples from real-world ones, to generate images based on the dataset he built.
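
For readers unfamiliar with the setup, the sketch below shows roughly what one GAN training step looks like in TensorFlow. It is a generic, minimal illustration of the generator-versus-discriminator loop described above, not Dibia’s actual code; the network shapes and hyperparameters are hypothetical.

```python
# Generic sketch of a single GAN training step (not Dibia's code).
import tensorflow as tf

LATENT_DIM = 100  # size of the random noise vector fed to the generator

# Generator: maps noise to a 64x64 grayscale image (hypothetical shape).
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(LATENT_DIM,)),
    tf.keras.layers.Dense(64 * 64, activation="tanh"),
    tf.keras.layers.Reshape((64, 64, 1)),
])

# Discriminator: outputs a logit scoring how "real" an image looks.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 1)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], LATENT_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # The discriminator learns to label real images 1 and fakes 0...
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # ...while the generator learns to make fakes score as real.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    return g_loss, d_loss
```

Training amounts to calling train_step on batches of real mask images until the generator starts producing samples the discriminator can no longer reliably reject.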

Dibia explains in a blog post that he manually created a dataset of over 9,000 diverse images depicting African masks of different shapes and textures.

“The goal is not to generate a perfectly realistic mask, but more towards observing any creative or artistic elements encoded in the resulting GAN,” he wrote.

The researcher trained the GAN using a larger set of non-curated images from a web search, with initial results showing the model generating images “distinct from their closest relatives in the dataset.”

“GANs can be useful for artistic exploration,” he wrote of his findings. “In this case, while some of the generated images are not complete masks, they excel at capturing the texture or feel of African art.”

Dibia plans to expand the African masks dataset and continue experimenting with “conditioned GANs” and their relationship to artistic properties.

Study Shows Twitter Is Toxic For Women—Especially Black Women

Twitter is a toxic place for women.

That’s according to a new report by Amnesty International and Element AI, which analyzed millions of tweets and found that women are targeted with hate speech on the platform.

The Troll Patrol report found that 7.1 percent of tweets sent to women can be considered “problematic” or “abusive.” That adds up to 1.1 million tweets in total, or one every 30 seconds on average.
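
Those two figures are consistent with a year of tweets, as a quick back-of-the-envelope check shows:

```python
# Sanity check: 1.1 million problematic or abusive tweets spread over a
# year works out to roughly one every 30 seconds, as the report states.
abusive_tweets = 1_100_000
seconds_per_year = 365 * 24 * 3600        # 31,536,000 seconds
print(seconds_per_year / abusive_tweets)  # ~28.7 seconds between tweets
```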

“We have built the world’s largest crowdsourced dataset about online abuse against women,” Milena Marin, Senior Adviser for Tactical Research at Amnesty International, said in a statement. “We have the data to back up what women have long been telling us—that Twitter is a place where racism, misogyny, and homophobia are allowed to flourish basically unchecked.”

Black women are particularly impacted and are 84 percent more likely to be targeted with hate speech online than their white counterparts. Women of color are 34 percent more likely to be targeted in tweets with abusive language.

The report analyzed millions of tweets received last year by 778 journalists and politicians from across the political spectrum in the UK and US.

“We found that, although abuse is targeted at women across the political spectrum, women of color were much more likely to be impacted and black women are disproportionately targeted,” said Marin. “Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices.”

The human rights group has repeatedly asked Twitter to release data detailing the abuse taking place on their platform and address the hate speech. While Twitter has yet to release a comprehensive breakdown of violence against women on their platform, the social media company did release its latest transparency report last week.

Encounter AI Wants To Streamline How We Order Food

Artificial intelligence could provide better service to customers, restaurants, and fast food chains by making the ordering process faster than ever. Encounter AI is doing so through voice-recognition technology.

Founded by CEO Derrick Johnson and Kabah Conda, Encounter AI is marketed toward businesses that use headset and intercom systems, with the aim of shortening ordering times. For example, while the automated system handles orders, cashiers are able to focus on monetary transactions.

Johnson said the biggest benefit of Encounter AI for consumers is its customization and personalization.

“You can say what diet you’re on and the system will show you menu items that are there,” Johnson said. With Encounter AI, consumers with allergies or other restricted diets can identify menu items that fit their needs within seconds.

Encounter AI’s ordering system also helps eliminate waste — cashiers do not have to worry about logging the wrong orders and risking customers throwing food away.

Johnson said other technological changes in the food and restaurant industries, such as mobile ordering and kiosks, have had contrasting impacts. Panera Bread, Wendy’s, and McDonald’s currently have ordering kiosks in their restaurants, but the equipment is not significantly changing ordering times because customers are not using it.

Alternatively, mobile ordering with Seamless, Grubhub, Postmates, UberEats, and in-house apps has made it easier for customers to get their food without going to an actual store.

Johnson said customers use kiosks as a backup plan and usually “choose humans first.” He also said customers aren’t going to use an app once they have already committed time to a store.

Encounter AI is not only for restaurants; the platform can also be used to streamline inventory management.

Encounter AI is currently being tested in Milwaukee, Chicago, and Atlanta with plans to release in early 2019.

Derrick Johnson and Kabah Conda will be participating in the AfroTech Cup Pitch Competition. Check out the live stream starting at 2 o’clock to see them and other founders pitch their ideas.