CyberSecurity updates
2025-02-23 10:36:51 Pacific

Malicious ML Models on Hugging Face Exploit Pickle Format - 14d

Researchers have identified malicious machine learning (ML) models on Hugging Face that use a novel evasion technique: deliberately ‘broken’ pickle files. The attack, dubbed ‘nullifAI’, abuses Pickle serialization, which permits arbitrary Python code execution during ML model deserialization. Because pickle executes its opcode stream sequentially, a payload placed at the start of the file runs before deserialization fails on the corrupted remainder; scanners that expect a well-formed stream choke on the corruption, which is why the malicious models were not initially flagged as unsafe by Hugging Face’s security mechanisms.
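
For context, a minimal sketch of the pickle primitive these attacks build on (not the nullifAI payload itself): pickle's __reduce__ hook lets an object name an arbitrary callable that the loader invokes, so merely deserializing untrusted bytes runs attacker-chosen code.

    import os
    import pickle

    class Malicious:
        # pickle calls __reduce__ to learn how to rebuild this object;
        # returning (callable, args) makes the *loader* invoke that callable
        def __reduce__(self):
            return (os.system, ("echo payload ran at deserialization time",))

    payload = pickle.dumps(Malicious())
    pickle.loads(payload)  # simply loading the bytes executes the command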

Hugging Face Model Loading Vulnerabilities Enable Code Execution - 23d

Vulnerabilities in Hugging Face’s model-loading libraries allow attackers to upload malicious models that execute code when loaded. Deprecated loading methods and weak validation of legacy model formats give attackers openings to inject and execute arbitrary code. The issue affects multiple ML frameworks integrated with Hugging Face.
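
As a mitigation sketch (file names here are hypothetical): recent PyTorch releases can restrict unpickling to tensor data, and the safetensors format avoids pickle entirely.

    import torch
    from safetensors.torch import load_file

    # weights_only=True (PyTorch >= 1.13) limits unpickling to tensor data
    # and rejects arbitrary callables embedded in a checkpoint
    state_dict = torch.load("model.pt", weights_only=True)

    # safetensors stores raw tensors plus a JSON header, so loading it
    # cannot trigger code execution
    tensors = load_file("model.safetensors")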