The NBA is Upping Its Tech Game With These ‘Smart Jerseys’

The NBA is introducing new “smart jerseys” along with a host of other technologies to keep fans engaged.

At the NBA’s Technology Summit during All-Star Weekend, NBA Commissioner Adam Silver demoed a smart jersey that lets users change the player name and number through an app.

So if your favorite player is not doing well, or you simply switch your allegiance from, say, James Harden to Giannis Antetokounmpo, you can change your gear at the touch of a button.

Silver also noted fans would soon be able to enter games with facial recognition and enjoy hologram mascots by 2038.

No, these will not be available wherever you buy team gear in the near future, but it’s still cool that the NBA is thinking of new ways to engage fans using technology.


President Trump is Expected to Sign an Executive Order on AI. Here’s Why It Matters

The United States fell behind when 18 countries around the world launched programs to stimulate AI development. Now, President Trump is expected to sign an executive order launching the American Artificial Intelligence (AI) Initiative.

A senior administration official reportedly told CNN that the initiative outlines “bold, decisive actions to ensure that AI continues to be fueled by American ingenuity, reflects American values and is applied for the benefit of the American people.”

Goals of the AI initiative will be split into five areas, as reported by multiple outlets: Research and Development, Resources, Ethical Standards, Automation, and International Outreach.

America is still the world’s leader in AI research, but recent investments in the technology from China, France, and South Korea are likely fueling this new order from the president.

“This executive order is about ensuring continued American leadership in AI, which includes ensuring AI technologies reflect American values, policies, and priorities,” an administration official told Axios.

While major voices in the tech community have applauded the initiative for making AI a policy priority, it fails to reference some key concerns. AI technologies such as facial recognition have the potential to infringe upon privacy and civil liberties.

Certain aspects of AI have come under fire over the past few years. One of the most notable incidents occurred when Amazon’s Rekognition technology falsely matched 28 members of Congress, most of them people of color, with public mugshots. Several civil rights groups have called on the tech industry not to sell its AI technology to the government, and companies like Microsoft have called for federal regulation of facial recognition technology, claiming that AI is amplifying widespread surveillance.

Jason Furman, a Harvard professor who served as chairman of the Council of Economic Advisers under President Obama and helped draft that administration’s 2016 report on AI, told Technology Review: “The Administration’s American AI Initiative includes all of the right elements, the critical test will be to see if they follow through in a vigorous manner.”

The administration has not provided many details on the plan, such as which projects will be launched or how much money will go into funding the different initiatives.

Additional information will be released over the next six months.



IBM Releases Dataset to Help Reduce Bias in Facial Recognition Systems

IBM wants to make facial recognition systems more fair and accurate.

The company just released a research paper along with a substantial dataset of 1 million images annotated with intrinsic facial features, including facial symmetry, skin color, age, and gender.

The tech giant hopes to use the Diversity in Faces (DiF) dataset to advance the study of diversity in facial recognition and further aid the development of the technology.

“Face recognition is a long-standing challenge in the field of Artificial Intelligence (AI),” the authors of the paper wrote. “However, with recent advances in neural networks, face recognition has achieved unprecedented accuracy, built largely on data-driven deep learning methods.”

John Smith, lead scientist at IBM, told CNBC that many prominent datasets lack balance and coverage of facial images.

“In order for the technology to advance it needs to be built on diverse training data,” he said. “The data does not reflect the faces we see in the world.”
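The imbalance Smith describes can be made concrete with a simple check of how a dataset’s images distribute across an annotated attribute. This is a hypothetical sketch, not IBM’s actual tooling; the attribute names and counts below are invented for illustration.

```python
from collections import Counter

# Hypothetical sketch (not IBM's tooling): measure how a face dataset's
# images distribute across an annotated attribute such as skin tone.
def attribute_distribution(annotations, attribute):
    """Return each attribute value's share of the dataset.

    annotations: list of dicts, one per image, mapping attribute -> value.
    """
    counts = Counter(a[attribute] for a in annotations)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Toy annotations: a dataset heavily skewed toward lighter skin tones.
annotations = [
    {"skin_tone": "lighter"}, {"skin_tone": "lighter"},
    {"skin_tone": "lighter"}, {"skin_tone": "darker"},
]

print(attribute_distribution(annotations, "skin_tone"))
# A skew like {'lighter': 0.75, 'darker': 0.25} is the kind of imbalance
# a dataset such as DiF is meant to help correct.
```

A model trained on the skewed toy set above would see three times as many lighter-skinned faces, which is exactly the “data does not reflect the faces we see in the world” problem.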

Bias in facial recognition technology is an ongoing issue in the industry, and tech companies are starting to take steps to address the problem. In December, Microsoft President Brad Smith wrote a company blog post outlining the risks and potential abuses of facial recognition technology in areas including privacy, democratic freedoms, and discrimination.

The company also wrote that it is calling for new laws that regulate artificial intelligence software to prevent bias.

Joy Buolamwini, a researcher at the M.I.T. Media Lab, researched how biases affect artificial intelligence and found the technology misidentified the gender of darker-skinned women 35 percent of the time.
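At bottom, a finding like Buolamwini’s is a comparison of error rates across demographic groups. Here is a minimal sketch of that kind of audit; the groups and numbers are invented to mirror the reported 35 percent figure, not taken from her study’s data.

```python
# Hypothetical sketch of a per-group error-rate audit, in the spirit of
# the MIT Media Lab study; the labels and counts are invented.
def error_rate_by_group(results):
    """results: iterable of (group, correct) pairs.

    Returns each group's share of misclassified examples.
    """
    totals, errors = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Toy audit: 7 of 20 darker-skinned women misclassified (35 percent)
# versus 1 of 100 lighter-skinned men (1 percent).
results = (
    [("darker-skinned women", False)] * 7
    + [("darker-skinned women", True)] * 13
    + [("lighter-skinned men", False)] * 1
    + [("lighter-skinned men", True)] * 99
)

print(error_rate_by_group(results))
```

Reporting accuracy per group, rather than one aggregate number, is what exposes disparities that a single overall accuracy figure would hide.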

“You can’t have ethical A.I. that’s not inclusive,” Buolamwini told the New York Times. “And whoever is creating the technology is setting the standards.”

IBM’s Diversity in Faces dataset is available to the public and researchers are urging others to build on this work.

“We selected a solid starting point by using one million publicly available face images and by implementing ten facial coding schemes,” they wrote in the paper. “We hope that others will find ways to grow the data set to include more faces.”

Civil Rights Groups Want To Stop Big Tech From Selling Facial Recognition Software To the Government

Facial recognition technology is the latest tool that big tech is racing to perfect, and a coalition of 85 civil rights organizations is trying to stop the country’s largest tech companies from selling it to the government.

The groups, which include the American Civil Liberties Union, Muslim Justice League, Color of Change and the National Immigration Law Center, sent letters today to Google, Microsoft and Amazon urging the companies to not sell their facial recognition technologies to the government.

“History has clearly taught us that the government will exploit technologies like face surveillance to target communities of color, religious minorities, and immigrants,” said Nicole Ozer, Technology and Civil Liberties director for the ACLU of California, in a press release. “We are at a crossroads with face surveillance, and the choices made by these companies now will determine whether the next generation will have to fear being tracked by the government for attending a protest, going to their place of worship, or simply living their lives.”

In January of last year, Google said it “fixed” a flaw in its facial recognition algorithm that misidentified black people as gorillas by blocking the terms “gorilla,” “chimp,” “chimpanzee,” and “monkey.”

Google CEO Sundar Pichai outlined the tech giant’s AI principles in a blog post, saying the company wanted to avoid creating or reinforcing unfair biases, aimed to be socially beneficial, and wanted to avoid injury to people.

In a December interview with the Washington Post, Pichai called fears about artificial intelligence legitimate. Google received backlash from its employees last year after the company worked with the Department of Defense to provide AI that could identify buildings and car tags. The company said that it would not sell its facial recognition technology until its dangers were addressed.

“Google has a responsibility to follow its AI principles,” the coalition said in its letter to the company. “Selling a face surveillance product that could be used by the government will never be consistent with these Principles.”

In a December blog post, Microsoft President Brad Smith highlighted some of the opportunities and issues that come with facial recognition technologies.

“Especially in its current state of development, certain uses of facial recognition technology increase the risk of decisions and, more generally, outcomes that are biased and, in some cases, in violation of laws prohibiting discrimination,” Smith said.

Smith also noted that facial recognition technologies bring new intrusions to people’s privacy and the use of AI by governments “can encroach on democratic freedoms.”

In June, more than 100 Microsoft employees protested the company’s work with ICE after the agency began separating children from their parents at the Southwest border. Microsoft employees wrote a letter calling for the end of a $19.4 million contract with the agency.

“As the people who build the technologies that Microsoft profits from, we refuse to be complicit,” the employees said. “We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm.”

The coalition commended Microsoft for addressing the issues with facial recognition technology and its work with ICE, but called for more action.

“The dangers of face surveillance can only be fully addressed by stopping its use by governments,” the coalition said in its letter to Microsoft. “This technology provides the government with an unprecedented ability to track who we are, where we go, what we do, and who we know.”

Amazon currently sells its Rekognition product to the American government and has worked with law enforcement agencies in the past. The ACLU, along with various other civil rights organizations, sent another letter to Amazon CEO Jeff Bezos in May highlighting their concerns over the use of Rekognition on vulnerable communities, protestors and immigrants.

“People should be free to walk down the street without being watched by the government. Facial recognition in American communities threatens this freedom,” the coalition said in its May letter. “In overpoliced communities of color, it could effectively eliminate it.”

Amazon has also pushed for U.S. Immigration and Customs Enforcement to use Rekognition, a move that the coalition called “a threat to the safety of community members.”

In September, seven members of Congress sent letters to the Federal Trade Commission, the Federal Bureau of Investigation and the Equal Employment Opportunity Commission after the ACLU tested Amazon’s face surveillance technology on members of Congress against 25,000 mugshots, which resulted in 28 false matches. Of the lawmakers mistakenly identified, 39 percent were people of color, including Representatives John Lewis (D-GA), Lacy Clay (D-MO) and Luis Gutiérrez (D-IL).

In an Amazon blog post, the company explains that the ACLU ran its test at an 80 percent confidence threshold, which carries a 5 percent misidentification rate. When the test was replicated with a confidence threshold of 99 percent, the false positive results dropped to zero.
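The threshold dispute comes down to how candidate matches are filtered by confidence score. This is an illustrative sketch, not the Rekognition API; the names and scores are invented to show how raising the threshold trades weaker matches for fewer false positives.

```python
# Illustrative sketch (not the Rekognition API): a face-matching system
# returns candidate matches with confidence scores, and a threshold
# decides which candidates count as matches at all.
def filter_matches(candidates, threshold):
    """Keep only candidates at or above the confidence threshold.

    candidates: list of (name, confidence) pairs, confidence in [0, 100].
    """
    return [(name, conf) for name, conf in candidates if conf >= threshold]

# Invented candidates for one probe image.
candidates = [("person_a", 82.0), ("person_b", 91.5), ("person_c", 99.2)]

# At an 80 percent threshold, all three survive, including weak matches
# that are more likely to be false positives.
print(filter_matches(candidates, 80.0))

# At 99 percent, only the strongest candidate remains.
print(filter_matches(candidates, 99.0))
```

The disagreement between Amazon and the ACLU is therefore less about the algorithm than about which threshold real deployments actually use, since a lower setting surfaces many more borderline (and potentially false) matches.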

“In real-world public safety and law enforcement scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgment (and not to make fully autonomous decisions),” said Dr. Matt Wood in the post.

Large tech companies came under fire throughout 2018 for the ways their facial recognition products endanger people of color and other minority groups, and 2019 is shaping up to be the same as civil rights groups continue to highlight technologies that could negatively impact minorities.


Days after the coalition sent its letter to Amazon, the company’s shareholders filed a resolution that would prohibit the sale of facial recognition products to governments and law enforcement unless an independent evaluation determines that “the technology does not cause or contribute to actual or potential violations of civil and human rights.”

The resolution also notes that the settings the ACLU used in its facial recognition test of members of Congress negatively impacted the results.

Microsoft Calls for New AI Laws to Prevent Bias

Microsoft announced it’s adopting a set of facial recognition principles and is calling for new laws that regulate artificial intelligence software to prevent bias.

In a company blog post, Microsoft President Brad Smith outlined the risks and potential for abuse associated with facial recognition technology, citing issues relating to privacy, democratic freedoms, and discrimination.

“Governments and the tech sector both play a vital role in ensuring that facial recognition technology creates broad societal benefits while curbing the risk of abuse,” said Smith.

As the issues with young technology become clearer, “we need to tackle the initial questions now and learn as we go,” he added.

Microsoft believes legislation can improve facial recognition systems by requiring them to be tested for accuracy and unfair bias. The company calls for laws requiring providers to supply documentation clearly explaining the limitations of their software and to submit to third-party testing.

“We readily recognize that we don’t yet have all the answers. Given the early stage of facial recognition technology, we don’t even know all the questions,” said Smith. “But we believe that taking a principled approach will provide valuable experience that will enable us to learn faster.”

Lawmakers Call on Amazon to Release Information About Bias in Facial Recognition Software

Congress is calling on Amazon CEO Jeff Bezos to release more information about the tech giant’s facial recognition software, Rekognition, after requests from lawmakers were unmet earlier this year.

In a letter, lawmakers revealed the company “failed to provide sufficient answers” regarding Rekognition’s technology and said they have serious concerns about the product and, most notably, who is using it.

According to Axios, Amazon confirmed it met with Immigration and Customs Enforcement (ICE) officials over the summer to pitch its facial recognition software.

The company has numerous government contracts—including operating private cloud services for the CIA—and is actively marketing its technology to police departments.

Several lawmakers, including Rep. John Lewis, Rep. Ro Khanna, and Sen. Edward J. Markey, all expressed their concern in the letter.

“We have serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color, and could stifle Americans’ willingness to exercise their First Amendment rights in public.”

The lawmakers request that Amazon provide the results of any internal accuracy or bias assessments performed on Rekognition, along with details on how it tests for facial recognition accuracy and bias.

The company has until December 13 to respond.

Amazon Had To Ditch An AI Experiment After The Tool Showed Bias Against Women

Amazon is scrapping a project that tried to bring artificial intelligence to hiring. The tool was supposed to streamline the hiring process, but the technology showed bias against women, according to Reuters.

Amazon’s program penalized applicants whose résumés included the word “women’s” or who attended all-women’s colleges. The program was only being tested internally, and as The Verge points out, it’s unclear whether it was ever actually used to make personnel decisions.

One of the biggest arguments against the use of artificial intelligence is how it perpetuates biases from data.

In September, seven members of Congress wrote letters to the Federal Trade Commission, the Federal Bureau of Investigation and the Equal Employment Opportunity Commission highlighting the risks of facial recognition technology, another form of AI.

“While they can offer many benefits, we are concerned by the mounting evidence that these technologies can perpetuate gender, racial, age, and other biases,” said the senators in their letter to the FTC.

Although companies are using AI and facial recognition to avoid bias in the hiring process, flaws in the technology are making companies question how to move forward.


Several Members of Congress Are Raising Questions About Facial Recognition Technology

Facial recognition technology is now being used to unlock smartphones and automatically tag friends on Facebook, and certain sectors of law enforcement are even finding uses for it. While it marks a major advancement in the way we live our everyday lives, some members of Congress believe this form of artificial intelligence poses a threat to civil rights.

Seven members of Congress sent letters to the Federal Trade Commission, the Federal Bureau of Investigation and the Equal Employment Opportunity Commission highlighting the risks of facial recognition technology.

“While they can offer many benefits, we are concerned by the mounting evidence that these technologies can perpetuate gender, racial, age, and other biases,” said the senators in their letter to the FTC.

In the letter to the EEOC, senators — including Kamala Harris and Elizabeth Warren — questioned if the technologies could violate the Civil Rights Act of 1964, the Equal Pay Act of 1963, or the Americans with Disabilities Act of 1990.

In the letter to the FTC, senators warned that facial recognition technology could lead to discrimination, with people misidentified for crimes and charged for them. The letter to the FBI mainly asked for updates on the recommendations made by the Government Accountability Office to address concerns about facial recognition technology.

Concerns over facial recognition are not new and have come up in national headlines several times over the past couple of years.

In July, the American Civil Liberties Union tested Amazon’s face surveillance technology on members of Congress against 25,000 mugshots, which resulted in 28 false matches.

In January, Google said it “fixed” a flaw in its facial recognition algorithm that misidentified black people as gorillas by blocking the terms “gorilla,” “chimp,” “chimpanzee,” and “monkey.”

And in February, a study by Joy Buolamwini at MIT showed that many major facial recognition technologies have issues accurately identifying the genders of darker-skinned women.

Each letter requests that the agencies respond by the end of September.

Delta Air Lines Is Putting Face Scanners in Atlanta International Airport

On Thursday, Delta Air Lines announced that it will provide facial recognition scanners in its terminals at Hartsfield–Jackson Atlanta International Airport. The scanners are offered to customers flying directly to an international destination.

The company is also offering the technology to people flying with partner airlines Aeromexico, Air France-KLM and Virgin Atlantic Airways.

Delta has previously partnered with CLEAR, a biometric company that allows flyers to check in with a fingerprint or iris scanner. In July, Delta began offering biometric self-service bag drops at Minneapolis–St. Paul International Airport.

“Ever since 9/11, airlines, airports and TSA have been trying to keep the aviation system safe without causing too much hassle and pain for travelers,” said freelance aviation security writer Benét Wilson. “They’ve tried a lot of things and facial recognition is just a natural progression coming into play.”

Hartsfield–Jackson is known as the world’s busiest airport, and Delta said it wants to make the process easier for its passengers.

“I think it’s a great idea and a really impressive advancement towards the future of travel,” said Atlanta resident and frequent Delta flyer Justin Williams. He said that the airline’s implementation of the technology is also admirable.

Customers who opt out of using the face scanners will be required to go through traditional screenings.