
How a Chinese scientist in the US is helping defend AI systems from cyberattacks

"Artificial intelligence (AI) has been used to defeat world-renowned Go players or beat humans in video games, but the technology has weaknesses, which can be exploited by cyberattacks."

* Artificial intelligence can be vulnerable to attacks that are invisible to humans but wreak havoc on computer systems

* Li Bo, an award-winning scientist, leads a team that works with IBM and financial institutions to defend against cyberattacks

 

Artificial intelligence (AI) has been used to defeat world-renowned Go players and beat humans in video games, but the technology has weaknesses that cyberattacks can exploit.

During Li Bo’s postdoctoral research three years ago at the University of California, Berkeley, the Chinese scholar and her collaborators from other institutions designed an experiment that could easily fool AI-based autonomous driving systems.

The team pasted custom-made stickers onto a stop sign, introducing new patterns while leaving the original letters visible. Humans could still read the sign perfectly, but the object recognition algorithm picked up something entirely different: a 45-mile-per-hour speed limit sign. If this happened on a real road, the consequences could be severe.
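To make the attack idea concrete, the sketch below shows the fast gradient sign method (FGSM), a classic recipe for crafting adversarial examples: it nudges an image just enough to change a classifier's prediction while the change stays nearly invisible to people. This is a generic illustration of the technique, not the stop-sign attack itself; the model, the image, and the epsilon budget are hypothetical stand-ins.

```python
# Minimal FGSM sketch (PyTorch). The model and data are toy stand-ins,
# not the road-sign recognizer from the experiment described above.
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon):
    """Perturb x so the model's loss rises, keeping the change within epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

# Toy usage: a stand-in linear classifier over 32x32 RGB images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)          # hypothetical input image
label = torch.tensor([0])                 # its correct class
adversarial = fgsm_attack(model, image, label, epsilon=0.03)
print((adversarial - image).abs().max())  # the perturbation stays tiny
```

A human looking at the perturbed image would see essentially the same picture, yet the classifier's output can flip, which is the same gap the stop-sign experiment exposed in the physical world.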

“This experiment has made the public realise how important security is for artificial intelligence,” said Li. “AI is likely to change the fate of human beings, but how safe is it?”

Born and raised in China, Li currently serves as an associate professor at the University of Illinois at Urbana-Champaign, where she is at the forefront of US research on so-called adversarial learning. The field frames attacker and defender as opposing players in a game-theoretic contest, an approach that has produced new methodologies for improving the robustness of AI.

One goal of Li’s work is to make machine learning safer and more trustworthy by incorporating human knowledge and reasoning. Specifically, it explores optimal adversarial strategies: the kind of hacks that fool AI yet often slip under the radar because they appear harmless to the human eye. Machine learning, a branch of AI, uses algorithms to find patterns in massive amounts of data.
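A common defence that grows out of this game-theoretic framing is adversarial training, in which the model is attacked while it is being trained and learns from the worst-case inputs it sees. The sketch below illustrates that inner-attacker, outer-defender loop under the same toy assumptions as the earlier FGSM example; it is a minimal illustration of the general technique, not Li's actual methodology.

```python
# Adversarial training sketch: attack each batch, then train on the result.
# Model, data, and epsilon are hypothetical stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epsilon = 0.03  # attack budget (illustrative value)

for step in range(100):                    # toy loop over random data
    x = torch.rand(8, 3, 32, 32)           # stand-in batch of images
    y = torch.randint(0, 10, (8,))         # stand-in labels

    # Inner step (the attacker): perturb the batch to maximize the loss.
    x_adv = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    # Outer step (the defender): update the model on the attacked inputs.
    optimizer.zero_grad()                  # clears grads from the inner step too
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
```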

“Right now, AI is facing a bottleneck because it is based on statistics,” said Li. “It will be smarter if it uses logical thinking like humans to predict and learn if it is under attack.”

Her research has had real-world applications. As more financial services companies use facial recognition in their payment systems, Li said her team has been working with these businesses to secure their applications.

Her team also helps IBM build software to protect Watson, the company’s data analytics system, which can answer questions posed in natural language. Their work keeps Watson from absorbing insults or curse words into its lexicon, so that it can maintain a polite conversation at all times and deliver appropriate answers to users’ queries.

 

Source: South China Morning Post
