How a Chinese scientist in the US is helping defend AI systems from cyberattacks

"Artificial intelligence (AI) has been used to defeat world-renowned Go players or beat humans in video games, but the technology has weaknesses, which can be exploited by cyberattacks."

* Artificial intelligence can be vulnerable to attacks that are invisible to humans but wreak havoc on computer systems

* Li Bo, an award-winning scientist, leads a team that works with IBM and financial institutions to defend against cyberattacks

Artificial intelligence (AI) has been used to defeat world-renowned Go players and beat humans in video games, but the technology has weaknesses that can be exploited by cyberattacks.

During Li Bo’s postdoctoral research three years ago at the University of California, Berkeley, the Chinese scholar and her collaborators from other institutions designed a scenario that could easily fool AI-based autonomous driving systems.

The team pasted custom-made stickers on a stop sign, introducing new patterns while leaving the original letters visible. Humans could still read the sign perfectly, but the object recognition algorithm picked up something entirely different: a 45mph speed limit sign. Had this happened on a real road, the consequences could have been severe.
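
For readers curious about the mechanics, the sketch below shows a classic digital analogue of such an attack, the fast gradient sign method, written in Python with PyTorch. It is not the team's physical sticker attack, only an illustration of how a perturbation small enough to be invisible to humans can be computed from a model's own gradients; the classifier, image tensor and label are assumed to be supplied by the reader, and the function name is ours.

```python
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that the model is likely to misread.

    `model` is any differentiable image classifier and `label` the true
    class; `epsilon` caps how far each pixel may move, so the change stays
    imperceptible to humans while still shifting the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```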

“This experiment has made the public realise how important security is for artificial intelligence,” said Li. “AI is likely to change the fate of human beings, but how safe is it?”

Born and raised in China, Li currently serves as an associate professor at the University of Illinois at Urbana-Champaign, where she is at the forefront of US research on so-called adversarial learning. The field pits AI systems against each other using game theory, helping researchers develop groundbreaking methodologies to improve the robustness of AI.

One goal of Li’s work is to make machine learning, a branch of AI that uses algorithms to find patterns in massive amounts of data, safer and more trustworthy by incorporating human knowledge and reasoning. Specifically, her research explores optimal adversarial strategies: the kind of hacks that fool AI yet often slip under the radar because they appear harmless to the human eye.
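
A minimal sketch of the resulting game, commonly known as adversarial training, appears below. It is a generic textbook formulation rather than Li's specific method, and it assumes the hypothetical fgsm_perturb helper from the earlier sketch plays the attacker.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One round of the game: attacker crafts inputs, defender adapts."""
    # Attacker move: perturb the batch with the fgsm_perturb helper
    # sketched earlier (any stronger attack could stand in here).
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    # Defender move: update the model to classify them correctly anyway.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```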

“Right now, AI is facing a bottleneck because it is based on statistics,” said Li. “It will be smarter if it uses logical thinking like humans to predict and learn if it is under attack.”

Her research has had real-world applications. As more financial services companies use facial recognition in their payment systems, Li said her team has been working with these businesses to secure their applications.

Her team also helps IBM build software to protect Watson, the company’s data analytics processing system that is capable of answering questions posed in natural language. Their work helps Watson avoid absorbing insults or curse words into its lexicon, so that it can maintain a polite conversation at all times and deliver appropriate answers to users’ queries.
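
As a loose illustration only, and not IBM's actual Watson code, the sketch below shows the simplest form of such lexicon guarding: screening candidate words against a denylist before they can enter a conversational system's vocabulary. Every name here is hypothetical.

```python
# Placeholder denylist; a production system would use a curated resource.
OFFENSIVE_TERMS = {"badword1", "badword2"}

def admit_to_lexicon(lexicon: set, candidate: str) -> bool:
    """Add `candidate` to the conversational lexicon only if it is clean."""
    word = candidate.strip().lower()
    if word in OFFENSIVE_TERMS:
        return False  # reject insults and curse words outright
    lexicon.add(word)
    return True
```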

Source: South China Morning Post
