
How a Chinese scientist in the US is helping defend AI systems from cyberattacks


* Artificial intelligence can be vulnerable to attacks that are invisible to humans but wreak havoc on computer systems

* Li Bo, an award-winning scientist, leads a team that works with IBM and financial institutions to defend against cyberattacks


Artificial intelligence (AI) has been used to defeat world-renowned Go players and beat humans at video games, but the technology has weaknesses that can be exploited by cyberattacks.

During Li Bo’s postdoctoral research three years ago at the University of California, Berkeley, the Chinese scholar and her collaborators from other institutions designed a scenario that could easily fool AI-based autonomous driving systems.

The team pasted custom-made stickers on a stop sign that introduced new patterns while leaving the original letters visible. Humans could still read the sign perfectly, but the object recognition algorithm picked up something entirely different: a 45-mile-per-hour speed limit sign. On a real road, such a misreading could have severe consequences.
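The trick behind such attacks can be sketched in a few lines. This is a minimal, purely illustrative toy (not the team's actual system): a tiny linear classifier stands in for the image recognizer, and a fast-gradient-style perturbation plays the role of the stickers. All weights, features, and class labels below are invented for illustration.

```python
import numpy as np

# Hypothetical two-class linear "sign recognizer":
# class 0 = "stop sign", class 1 = "speed limit".
W = np.array([[1.0, 0.2],    # weights for class 0 ("stop sign")
              [0.8, 0.5]])   # weights for class 1 ("speed limit")
x = np.array([0.9, 0.1])     # features extracted from a clean stop sign

def predict(features):
    """Return the index of the highest-scoring class."""
    return int(np.argmax(W @ features))

# Targeted fast-gradient-style attack: shift each feature a small, fixed
# amount in the direction that raises the "speed limit" score relative to
# the "stop sign" score -- the analogue of the carefully placed stickers.
gradient = W[1] - W[0]            # d(logit_1 - logit_0) / d(features)
epsilon = 0.4                     # perturbation budget
x_adv = x + epsilon * np.sign(gradient)

print(predict(x))      # 0: the clean sign is read correctly as "stop"
print(predict(x_adv))  # 1: the perturbed sign is misread as "speed limit"
```

The perturbation is small and structured, which is why an attack like this can be invisible to a human driver while completely changing the model's answer.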

“This experiment has made the public realise how important security is for artificial intelligence,” said Li. “AI is likely to change the fate of human beings, but how safe is it?”

Born and raised in China, Li currently serves as an associate professor at the University of Illinois at Urbana-Champaign, where she is at the forefront of US research on so-called adversarial learning. This field pits AI systems against each other in game-theoretic contests, an approach that is driving new methods for improving the robustness of AI.

One goal of Li’s work is to make machine learning safer and more trustworthy by incorporating human knowledge and reasoning. Specifically, it explores optimal adversarial strategies – the kind of hacks that fool AI, but often slip under the radar because they appear harmless to the human eye. Machine learning, a branch of AI, uses algorithms to find patterns in massive amounts of data.

“Right now, AI is facing a bottleneck because it is based on statistics,” said Li. “It will be smarter if it uses logical thinking like humans to predict and learn if it is under attack.”

Her research has had real-world applications. As more financial services companies use facial recognition in their payment systems, Li said her team has been working with these businesses to secure their applications.

Her team also helps IBM build software to protect Watson, the company’s data analytics system that can answer questions posed in natural language. Their work helps Watson avoid absorbing insults or curse words into its lexicon, so that it can maintain a polite conversation at all times and deliver appropriate answers to users’ queries.


Source: South China Morning Post

