Monday, 20 September 2021 12:02 GMT

Researchers demonstrate that malware can be concealed in neural networks


(MENAFN- NewsBytes) In our pursuit of ever-better software to power our everyday lives, we seem to have forgotten that malware can hide in almost anything that runs on a computer. According to a recent study, malware can be embedded into the parameters of a machine learning model's neural network so effectively that common antivirus tools fail to detect it, all while the network continues to operate normally.

In this article
  • Malware-embedded neural network model had just 1% more errors
  • Malware remained undetected by 50 common antivirus programs
  • Malware would need to be extracted, recompiled on victim's computer
  • Once reassembled, malware could be detected by real-time protection systems
  • Antivirus probably isn't optimized for scanning neural networks yet
  • Researchers hope their work contributes to future cybersecurity efforts
Works as expected: Malware-embedded neural network model had just 1% more errors

A research paper recently published by Zhi Wang, Chaoge Liu, and Xiang Cui presented a method of hiding 36.9MB of malware inside a 178MB AlexNet model designed to classify images. The malware-embedded model performed within 1% of the accuracy of the original, malware-free image classification model, and the embedded malware still managed to avoid detection.
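The general idea can be illustrated with a toy sketch: overwrite the low-order bytes of a model's float32 weights with payload bytes, leaving each value only slightly perturbed. This is a minimal illustration assuming NumPy; the function names, the three-bytes-per-float choice, and the layout details are assumptions loosely modeled on the paper's description, not the researchers' actual code.

```python
import numpy as np

def embed_payload(weights, payload, bytes_per_float=3):
    """Hide payload bytes in the low-order bytes of float32 weights.

    Illustrative sketch only. In a little-endian float32, bytes 0..2
    hold the mantissa plus the lowest exponent bit, so overwriting
    them perturbs each value but usually keeps it in a similar range.
    """
    buf = bytearray(weights.astype(np.float32).tobytes())
    capacity = (len(buf) // 4) * bytes_per_float
    if len(payload) > capacity:
        raise ValueError("payload too large for this weight tensor")
    i = 0
    for f in range(len(buf) // 4):
        for b in range(bytes_per_float):
            if i >= len(payload):
                return np.frombuffer(bytes(buf), dtype=np.float32).reshape(weights.shape)
            buf[f * 4 + b] = payload[i]  # overwrite one low-order byte
            i += 1
    return np.frombuffer(bytes(buf), dtype=np.float32).reshape(weights.shape)

def extract_payload(weights, length, bytes_per_float=3):
    """Recover the hidden bytes from the modified weights (receiver side)."""
    raw = weights.astype(np.float32).tobytes()
    out = bytearray()
    for f in range(len(raw) // 4):
        for b in range(bytes_per_float):
            if len(out) >= length:
                return bytes(out)
            out.append(raw[f * 4 + b])
    return bytes(out)
```

A round trip — embedding bytes into a random weight tensor and extracting them again — recovers the payload exactly, since the overwritten bytes survive unchanged as long as the weights are stored and shipped as raw float32 values.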

Under the radar: Malware remained undetected by 50 common antivirus programs

The researchers, all from the University of Chinese Academy of Sciences, explained that replacing up to 50% of the AlexNet model's neurons still kept its accuracy above 93.1%. Such models were reportedly tested against 50 common antivirus systems, but the malware remained undetected. According to the researchers, this is because the malware is "disassembled" before being embedded into the network's neurons.

The other half: Malware would need to be extracted, recompiled on victim's computer

The researchers noted that hiding the malware is just half the job for bad actors. They would also need a malicious receiver program to reassemble the malware on the victim's computer. This means the attack can be stopped if the victim verifies the neural network model before running it. Additionally, the reassembled malware can be caught by "traditional methods," including static and dynamic analysis.
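The "verify the model before launching it" defense boils down to checking the model file against a trusted hash obtained out of band. A minimal sketch using Python's standard hashlib; the function name and workflow are illustrative assumptions, not part of the study:

```python
import hashlib

def verify_model(path, expected_sha256):
    """Return True only if the model file matches a trusted SHA-256 hash.

    Sketch of the integrity check a victim could run before loading a
    downloaded model; a tampered (malware-embedded) file would fail it.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large model files don't fill memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

The check only helps if the expected hash comes from a source the attacker cannot modify, such as the model publisher's signed release notes.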

Information: Once reassembled, malware could be detected by real-time protection systems

The malware is hidden using steganography, a classic technique in which one object or entity is concealed within another. However, once the malware has been reassembled to infect the computer, it risks detection by the victim's antivirus program.

Nobody's looking: Antivirus probably isn't optimized for scanning neural networks yet



In conversation with Motherboard, cybersecurity researcher Dr. Lukasz Olejnik explained that antivirus software couldn't find the malware hidden in the neural network simply because "nobody is looking in there." Olejnik likely means that conventional antivirus is designed to find malware bundled into applications and consumer-grade programs, while machine learning models remain a niche use case for consumers today.

Potential: Researchers hope their work contributes to future cybersecurity efforts

On the flipside, neural network models are often large and complex, giving potential bad actors room to conceal even larger malware. The well-intentioned researchers who developed the method are nonetheless optimistic. Their paper notes that "AI-assisted attacks will emerge and bring new challenges for computer security," so they hope "the proposed scenario will contribute to future protection efforts."



Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.