Adversarial AI: When Artificial Intelligence Goes Bad
Fri, December 3, 2021


Rahul Kashyap of business news platform Forbes stated that AI is at the forefront of new technologies today, such as mobile phones, home devices, and banking tools. In the cybersecurity field, AI is helping to detect malicious behavior and sophisticated threats. Complex models can now spot attack trends faster than earlier approaches.

But what if cyberattackers harness the power of AI to wreak havoc? Is it possible for attackers to subvert AI-based cybersecurity products to avoid detection? It is highly plausible. This is called adversarial AI, or adversarial machine learning, and it should concern businesses and consumers alike as algorithms become better and better.

 

Alarming Statistics on Cybersecurity and Adversarial AI 

In a study of 850 senior executives across seven industries by French multinational corporation Capgemini, 80% of telecom firms stated that they rely on AI to identify threats and counter cyberattacks, as cited by Louis Columbus of Forbes.

Moreover, 75% of banking executives acknowledged that they will need AI to repel cyberattacks, and 69% of senior executives said they would not be able to counter cyberattacks without the help of AI. Meanwhile, 59% of utility executives see AI as an essential tool for thwarting a cyberattack, as the utility sector is one of the industries most vulnerable to such threats.

In another report published by Capgemini, 51% of businesses said they rely on AI primarily for threat detection, ahead of prediction and response (47%). However, only a small number of enterprises have progressed past detection to prediction and response. Research and advisory firm Gartner projected that $137.4 billion would be spent on information security and risk management in 2019, rising to $175.5 billion by 2023 at a CAGR of 9.1%.

Further, cloud security, data security, and infrastructure protection will be the fastest-growing areas of security expenditure until 2023. Also, 71% of today’s organizations said they have spent more on AI and machine learning for cybersecurity than they did two years ago, according to cybersecurity intelligence firm Webroot. It added that 26% and 28% of US and Japanese IT professionals, respectively, said their company could be doing more. 

The survey also showed that 84% of respondents reported that cyber-criminals are using AI and machine learning to execute their attacks. These statistics show that AI/machine learning-based cybersecurity is no longer a nice-to-have but is crucial to keeping data secure from bad actors.

Adversarial AI in Action 

Dawn Song, a professor and cybersecurity researcher at the University of California, Berkeley, said adversarial machine learning can be used to compromise any system built on the technology, according to technology magazine MIT Technology Review. Song's team explored how adversarial learning can be exploited. For example, they showed how attackers can take advantage of machine learning algorithms designed to automate email responses, coaxing them into sending out sensitive data like credit card numbers.
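To make the extraction idea concrete, here is a minimal, hypothetical sketch in Python: a tiny next-word "autocomplete" stand-in that has memorized a sensitive string from its training text and can be coaxed into completing it from a short prefix. The corpus, card number, and prompt are all fabricated for illustration and are not Song's actual setup.

# A toy stand-in for an email autocomplete model: a next-word lookup table
# built from (fabricated) training text that happens to contain a sensitive string.
corpus = ("please charge my card 4111 1111 1111 1111 for this order. "
          "thanks, and see you at the meeting. ") * 20

words = corpus.split()
next_word = {}
for current, following in zip(words, words[1:]):
    next_word.setdefault(current, following)   # remember the first continuation seen

def autocomplete(prefix, steps=5):
    """Greedily extend a prefix the way a memorizing model would."""
    tokens = prefix.split()
    for _ in range(steps):
        tokens.append(next_word.get(tokens[-1], "..."))
    return " ".join(tokens)

# Probing with an innocuous-looking prefix leaks the memorized number.
print(autocomplete("charge my card"))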

Song also demonstrated how computer vision systems in vehicles can be tricked by placing stickers on road signs. These carefully crafted inputs fool the algorithms powering the AVs into reading stop signs as speed limit signs.
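The stop-sign result belongs to the broader family of evasion attacks, the best-known digital example of which is the fast gradient sign method (FGSM). Below is a minimal sketch of FGSM in Python using PyTorch; the model, images, and labels are placeholders, and this is only the digital analogue of the physical-sticker technique, not the method Song's team used.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge `image` in the direction that increases the classifier's loss,
    so the prediction flips while the change stays visually small."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with any image classifier and a batch of sign photos:
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# adv_batch = fgsm_attack(model, sign_batch, sign_labels)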

Prateek Mittal and colleagues at Princeton tackled how adversarial tactics applied to AI can leave systems open to attacks, as mentioned in Adam Hadhazy’s article for Princeton University.  

They noted, “Just as software is prone to being hacked and infected by computer viruses, or its users targeted by scammers through phishing and other security-breaching ploys, AI-powered applications have their own vulnerabilities.” Sadly, deploying adequate safeguards has lagged. 

There are three broad types of adversarial AI. The first is data poisoning, in which attackers tamper with the training data used to build security AI. The second is adversarial inputs at runtime, in which attackers craft malicious inputs that slip past a trained model's detection. Finally, there are privacy attacks, in which adversaries probe a model to extract the private information it was trained on. From these categories alone, we can glean that adversarial attacks manifest in various forms, including false flag attacks.
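As a rough illustration of the first category, the sketch below poisons a toy training set by flipping a fraction of its labels and shows how the resulting classifier degrades. The dataset and poisoning fractions are invented for demonstration; real poisoning attacks are far more targeted and stealthy.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic binary-classification problem standing in for security telemetry.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip a random subset of training labels, as a poisoning attacker might."""
    poisoned = labels.copy()
    idx = rng.choice(len(poisoned), size=int(fraction * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned {fraction:.0%} of labels -> test accuracy {clf.score(X_test, y_test):.2f}")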

A common example of adversarial attacks is deepfakes, which can be used to manipulate video and audio and commit fraud. To illustrate, a deepfake was used to fool an executive at a UK energy firm into wiring money to a supplier. The victim received a phone call telling him to initiate the transfer, and he believed he was speaking with his boss. It turned out that the call and the email he received were generated by an AI that imitated his boss's mannerisms, accent, and diction.

Adversarial Attacks Can Occur Everywhere

Could adversarial AI influence election results, such as those of the 2020 US election? Indeed. AI can be utilized to enable fraud in business and daily life. Emails purloined from candidates can be used to craft believable messages that contradict a candidate's position. Beyond that, people who know the inner workings of AI systems can become participants in adversarial attacks if they are coerced or sufficiently motivated to do so.

Therefore, security experts and product developers should factor in the potential for abuse when developing AI models, and developers should assume the worst-case scenario when building them.

As AI permeates our everyday lives more and more, we all become vulnerable to the security risks associated with the technology. It can be weaponized to extract personal information or, perhaps, influence an election's outcome. We must, therefore, be prepared to minimize the risks of adversarial AI.
