Critical PickleScan Zero-Day Vulnerabilities Exposed: AI Supply Chain Risks (2025)

Your AI models might not be as safe as you think. A shocking new report reveals that three critical zero-day vulnerabilities have been uncovered in PickleScan—a popular open-source tool used to scan Python pickle files and PyTorch models for threats. These flaws don’t just expose weaknesses in a single program; they reveal cracks in the entire AI model supply chain. And here’s where it gets even more concerning: attackers could use these loopholes to sneak malicious code into machine learning models without triggering any alarms.

According to a newly published advisory by the JFrog Security Research Team (December 2, 2025), these vulnerabilities—each rated a severe 9.3 on the CVSS scale—could allow cybercriminals to bypass PickleScan’s security checks and distribute infected AI models that appear completely legitimate. You can read the full technical breakdown on JFrog’s official blog.

The Three Hidden Flaws

1. CVE-2025-10155: The file extension trick. This first vulnerability might sound simple, but it’s surprisingly dangerous. Researchers discovered that if an attacker renames a malicious pickle file to use a common PyTorch extension like .pt or .bin, PickleScan misidentifies the file type and hands it over to PyTorch’s internal parser. Because PickleScan trusted file extensions more than the actual file contents, it would skip proper scanning—while PyTorch unknowingly loaded the dangerous file. It’s a textbook example of why relying on surface-level checks can lead to big risks.
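A rough sketch of the mismatch (illustrative Python, not PickleScan’s actual code): modern PyTorch checkpoints are ZIP archives, so a scanner that routes by suffix alone can be fooled by a raw pickle wearing a .pt name, while a simple content check would notice the disagreement.

```python
import os
import pickle
import tempfile
import zipfile

# Hypothetical illustration: a raw pickle saved under a PyTorch-style
# extension. Genuine modern .pt checkpoints are ZIP archives, so the
# suffix and the actual contents tell two different stories.
payload = pickle.dumps({"weights": [1, 2, 3]})

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "model.pt")
    with open(path, "wb") as f:
        f.write(payload)

    # Extension-based routing (the flawed approach): trust the suffix.
    looks_like_torch = path.endswith((".pt", ".bin"))

    # Content-based check: a real PyTorch ZIP archive would pass this.
    is_zip = zipfile.is_zipfile(path)

print(looks_like_torch, is_zip)  # True False -> suffix and contents disagree
```

When the two answers diverge, a scanner should treat the file as hostile rather than deferring to whichever parser the extension suggests.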

2. CVE-2025-10156: The ZIP archive blind spot. This one digs deeper into how different tools handle ZIP files. PickleScan used Python’s zipfile module, which throws errors when it spots data corruption (CRC errors). PyTorch, however, looks the other way—it loads the file even if CRC checks fail. JFrog’s team found that by deliberately corrupting parts of a model archive’s CRC values, attackers could trick PickleScan into failing silently while PyTorch continued to load the dangerous payload. In other words, PickleScan’s error handling became an unintentional shield for attackers.
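The divergence can be reproduced with the standard library alone (an illustrative sketch; real model archives are more elaborate): corrupt the CRC-32 recorded in a ZIP’s central directory, and a strict reader like Python’s zipfile refuses the member, while a lenient parser may carry on regardless.

```python
import io
import zipfile

# Build a small in-memory ZIP archive (stand-in for a model archive).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.pkl", b"payload bytes")

raw = bytearray(buf.getvalue())

# Corrupt the CRC-32 stored in the central directory record
# (signature PK\x01\x02; the CRC field sits 16 bytes in).
cd = raw.find(b"PK\x01\x02")
raw[cd + 16:cd + 20] = b"\x00\x00\x00\x00"

# zipfile validates the CRC when the member is read and raises
# BadZipFile on mismatch; a CRC-ignoring parser would keep loading.
try:
    with zipfile.ZipFile(io.BytesIO(raw)) as zf:
        zf.read("data.pkl")
    result = "loaded"
except zipfile.BadZipFile:
    result = "rejected"

print(result)  # rejected -> the strict reader bails out on the bad CRC
```

A scanner that aborts here while the downstream framework keeps going has, in effect, scanned nothing.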

3. CVE-2025-10157: Outsmarting the blacklist. The third and arguably most clever flaw allowed attackers to sneak around PickleScan’s list of forbidden imports. Instead of directly referencing a blacklisted Python module, malicious code could invoke a subclass or an indirect reference—causing the scanner to mark it as merely “Suspicious” rather than outright “Dangerous.” In one proof-of-concept demo, researchers used internal asyncio classes to run arbitrary commands during the model-deserialization process—all while slipping past the supposedly secure filter.
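A toy denylist scanner makes the weakness concrete (illustrative only: DENYLIST and scan are invented names, not PickleScan’s API). Using pickletools to inspect opcodes without executing anything, only direct GLOBAL references to listed callables get rated Dangerous; anything resolved indirectly earns, at best, a Suspicious.

```python
import pickle
import pickletools

# Invented denylist of module/attribute pairs for illustration.
DENYLIST = {("os", "system"), ("posix", "system"), ("subprocess", "Popen")}

def scan(data: bytes) -> str:
    """Naive exact-name denylist scan: inspect opcodes, never execute."""
    verdict = "Clean"
    globals_seen = []
    for op, arg, _pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            mod, name = arg.split(" ", 1)
            globals_seen.append((mod, name))
        elif op.name == "STACK_GLOBAL":
            # Names are built on the stack, so the scanner cannot
            # resolve them statically -> merely "Suspicious".
            verdict = "Suspicious"
    if any(g in DENYLIST for g in globals_seen):
        return "Dangerous"
    return verdict

# Direct reference to a denylisted callable: caught.
direct = b"cos\nsystem\n(S'id'\ntR."
# Indirect reference via an allowed builtin: slips to "Suspicious".
indirect = pickle.dumps(getattr)

verdict_direct = scan(direct)      # "Dangerous"
verdict_indirect = scan(indirect)  # "Suspicious"
print(verdict_direct, verdict_indirect)
```

Anything short of Dangerous is easy to wave through in practice, which is exactly the gap the researchers exploited.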

What This Means for AI Security

These vulnerabilities expose a much bigger issue: the fragility of security tools within the AI supply chain. When tools like PickleScan and frameworks like PyTorch interpret files differently, the gap between them becomes a perfect entry point for attackers. JFrog warns that such reliance on a single scanning tool is risky, especially as AI model hubs like Hugging Face grow larger and more open to community uploads.

The researchers flagged several systemic concerns:

  • Overdependence on one security scanner.
  • Different file-handling behaviors between ML frameworks and scanners.
  • Increased potential for supply chain compromises through shared model repositories.

The good news? The vulnerabilities were responsibly disclosed to PickleScan maintainers on June 29, 2025, and patches were rolled out on September 2, 2025. Users are strongly urged to upgrade to PickleScan version 0.0.31 as soon as possible. JFrog also recommends adopting layered security defenses and shifting toward safer serialization formats like Safetensors, which don’t rely on Python’s inherently risky pickle mechanism.
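In practice, the remediation above boils down to two commands (assuming the PyPI distributions `picklescan` and `safetensors`):

```shell
# Upgrade to the patched PickleScan release (0.0.31 or newer)
pip install --upgrade "picklescan>=0.0.31"

# Adopt Safetensors for new artifacts: the format stores raw tensor
# bytes plus a JSON header, with no pickle deserialization involved
pip install safetensors
```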

But here’s the debate: if even trusted tools like PickleScan can be bypassed, are open AI model-sharing platforms too convenient for their own good? Should developers focus more on reinventing safer ML frameworks—or on improving the scanning tech that protects them?

What’s your take? Are these flaws an inevitable growing pain of the AI era, or a warning that the model supply chain needs a total rethink? Share your thoughts in the comments—this is a discussion worth having.

Article information

Author: Jonah Leffler