Hugging Face, a popular platform for sharing and collaborating on AI models, has been found to host malicious software. Security researchers at ReversingLabs discovered two AI models that contained hidden code designed to infect devices with malware.
The “nullifAI” Attack:
This attack abuses Pickle, Python’s built-in serialization module and one of the common formats for saving and loading AI model data. While convenient, Pickle has a well-known security flaw: loading (deserializing) a Pickle file can execute arbitrary Python code embedded in it.
On Hugging Face, where anyone can upload and download models, this flaw can be exploited. Attackers can embed malicious Python code within the Pickle file, and that code executes the moment a user loads the model. ReversingLabs dubbed the technique “nullifAI” because it nullifies existing safeguards, hiding malware inside seemingly harmless AI models.
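To make the risk concrete, here is a minimal, self-contained sketch of the Pickle behavior the attack relies on. The EvilPayload class and its harmless echoed shell command are hypothetical stand-ins for a real payload, not ReversingLabs’ actual sample; the point is that pickle.loads runs the attacker’s callable before returning anything to the caller.

```python
import os
import pickle

# Hypothetical stand-in for a malicious payload. Any object whose
# __reduce__ returns a (callable, args) tuple has that callable invoked
# during unpickling -- before the caller ever sees a result.
class EvilPayload:
    def __reduce__(self):
        # Harmless demo payload; a real attack would run a downloader
        # or reverse shell here instead of an echo.
        return (os.system, ('echo "arbitrary code ran at load time"',))

blob = pickle.dumps(EvilPayload())

# The victim only needs to *load* the data for the code to execute.
pickle.loads(blob)  # runs the shell command via os.system
```

Nothing about such a file looks executable from the outside; the code path is built into the deserializer itself.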

Evading Detection:
The researchers found that the malicious models were stored in PyTorch format, which is essentially Pickle data packed inside an archive. Instead of the ZIP container that torch.save normally produces, these files were compressed with 7z, a nonstandard container that standard security checks, including Hugging Face’s own Picklescan tool, could not unpack, so the hidden malware went undetected.
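One mitigation this suggests, sketched roughly below: check a downloaded checkpoint’s container format before loading it. inspect_checkpoint is a hypothetical helper, not part of Picklescan or PyTorch, and the magic-byte checks cover only the two formats discussed above.

```python
import zipfile

# Container signatures (assumption: we only distinguish these two formats).
ZIP_MAGIC = b"PK\x03\x04"               # torch.save normally emits a ZIP archive
SEVENZIP_MAGIC = b"7z\xbc\xaf\x27\x1c"  # 7z archives start with these bytes

def inspect_checkpoint(path: str) -> str:
    """Hypothetical pre-load triage: flag checkpoints whose container
    differs from the ZIP archive a standard PyTorch save produces."""
    with open(path, "rb") as f:
        header = f.read(6)
    if header.startswith(ZIP_MAGIC):
        # List archive members without extracting; a normal checkpoint
        # contains a data.pkl entry that pickle scanners need to reach.
        with zipfile.ZipFile(path) as zf:
            return f"ZIP container with {len(zf.namelist())} members"
    if header.startswith(SEVENZIP_MAGIC):
        return "7z container: nonstandard for PyTorch, treat as suspicious"
    return "unknown container: do not load"

# Example: print(inspect_checkpoint("downloaded_model.bin"))
```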
The Impact:
This discovery highlights a serious security risk for anyone who relies on Hugging Face for AI models. Unwittingly downloading and loading a malicious model can infect a developer’s system with malware, potentially compromising sensitive data and disrupting workflows.
Hugging Face’s Response:
Hugging Face promptly responded to the researchers’ report, removing the malicious models within 24 hours. They also reportedly updated the Picklescan tool to better detect and prevent such attacks in the future.
A Call for Caution:
This incident serves as a stark reminder of the importance of security best practices when working with AI models, especially those obtained from open-source platforms. Developers should exercise caution when downloading and loading models, carefully scrutinizing their source and using reputable platforms.
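One practical safeguard, sketched below under the assumption that the download is a standard PyTorch checkpoint at a hypothetical path: load untrusted weights with PyTorch’s weights_only mode, which restricts unpickling to tensors and other allow-listed types.

```python
import torch

MODEL_PATH = "downloaded_model.pt"  # hypothetical path to an untrusted download

# weights_only=True restricts unpickling to an allow-list of safe types,
# so a __reduce__-style payload raises an error instead of executing.
# (Available since PyTorch 1.13; the default from PyTorch 2.6 onward.)
state_dict = torch.load(MODEL_PATH, weights_only=True)
```

Formats that cannot carry executable code at all, such as Safetensors, avoid the problem entirely.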
Key Takeaways:
- Hugging Face, a popular AI model repository, was found to host malicious software.
- Attackers abused Pickle serialization to embed malware within AI models, evading existing security checks.
- This “nullifAI” attack poses a significant risk to developers who rely on Hugging Face for AI models.
- Hugging Face has taken steps to address the issue, but developers must remain vigilant and exercise caution when working with AI models.
This incident underscores the need for robust security measures and continuous vigilance in the rapidly evolving world of AI. As AI technology continues to advance, so too must our efforts to ensure the security and integrity of AI systems and the platforms that support them.