Bright minds, artificial and real, strain to fight bias in AI

"The startup is one of many organizations, including more than a dozen startups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from AI systems."

In 2018, Liz O’Sullivan and her colleagues at a prominent artificial intelligence startup began work on a system that could automatically remove nudity and other explicit images from the internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach AI software how to recognize indecent images. But once the photos were tagged, O’Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For O’Sullivan, the moment showed how easily — and often — bias could creep into artificial intelligence. It was a “cruel game of Whack-a-Mole,” she said.

In June, O’Sullivan, a 36-year-old New Yorker, was named CEO of a new company, Parity. The startup is one of many organizations, including more than a dozen startups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from AI systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of AI systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias.

This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in AI, including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown. “Some sort of legislation or regulation is inevitable,” said Christian Troncoso, senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time there is one of these terrible stories about AI, it chips away at public trust and faith.”

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.

A spokesperson for UnitedHealth, Tyler Mason, said the company’s algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.

More than $100 million has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, compared with $186 million for all of last year, according to PitchBook, a research firm that tracks financial activity.

But efforts to address the problem reached a tipping point recently when the Software Alliance offered a detailed framework for fighting bias in AI, including the recognition that some automated technologies require regular oversight from humans.

The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to control the problem.

Although they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for fighting it.

O’Sullivan said there was no simple solution to bias in AI. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it is.

As O’Sullivan saw with the tagging work done in India, bias can creep into a system when designers choose the wrong data or sort through it in the wrong way. Studies show that face-recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.
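The kind of skew the tagging incident exposed typically surfaces as unequal error rates across demographic groups, which is what bias audits measure. As a rough illustration only, and not the method used by Parity or any company named here, the following Python sketch compares accuracy and false-positive rates across two hypothetical groups in a toy labeled evaluation set:

    import pandas as pd

    # Toy data, purely illustrative: label/prediction of 1 means
    # the content was marked "indecent", 0 means it was not.
    df = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "label":      [0,   1,   0,   0,   0,   1],
        "prediction": [0,   1,   0,   1,   1,   1],
    })

    # Report per-group accuracy and false positive rate.
    for group, rows in df.groupby("group"):
        negatives = rows[rows["label"] == 0]
        fpr = (negatives["prediction"] == 1).mean() if len(negatives) else float("nan")
        acc = (rows["prediction"] == rows["label"]).mean()
        print(f"group {group}: accuracy {acc:.2f}, false positive rate {fpr:.2f}")

A large gap between groups, such as one group’s harmless photos being flagged as indecent far more often than another’s, is the kind of signal that points back to skewed training data or labels.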

Source: Chicago Tribune
