In Apache Spark 2.4.5 and earlier, a standalone resource manager's master may be configured to require authentication (spark.authenticate) via a shared secret. Even when this is enabled, however, a specially crafted RPC to the master can succeed in starting an application's resources on the Spark cluster without presenting the shared secret. This can be leveraged to execute shell commands on the host machine. This does not affect Spark clusters using other resource managers (YARN, Mesos, etc.).
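For context, the spark.authenticate setting mentioned above is normally enabled through Spark configuration, either in spark-defaults.conf or programmatically. The following is a minimal sketch of the programmatic form, using the standard spark.authenticate and spark.authenticate.secret keys with a placeholder secret value; note that on the affected versions this setting alone does not block the unauthenticated RPC path described in this advisory.

  import org.apache.spark.SparkConf

  // Sketch: require a shared secret for Spark RPC authentication on a
  // standalone cluster. The secret value here is a placeholder.
  val conf = new SparkConf()
    .set("spark.authenticate", "true")
    .set("spark.authenticate.secret", "change-me")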
{ "nvd_published_at": "2020-06-23T22:15:00Z", "cwe_ids": [ "CWE-287", "CWE-306" ], "severity": "CRITICAL", "github_reviewed": true, "github_reviewed_at": "2021-05-11T21:39:30Z" }