All the vulnerabilities related to lodash - lodash
var-202007-1448
Vulnerability from variot

Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20. lodash is also vulnerable to resource allocation without limits or throttling; information may be tampered with and service operation may be interrupted (DoS). lodash is an open source JavaScript utility library. An input validation error vulnerability exists in lodash 4.17.15 and earlier versions; a remote attacker could exploit it to execute arbitrary code on the system via the 'merge', 'mergeWith' and 'defaultsDeep' functions. The affected packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
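
The zipObjectDeep flaw stems from unguarded deep property assignment: a path segment of `__proto__` walks onto `Object.prototype`, so attacker-controlled keys appear on every object. The `deepSet` function below is a minimal, hypothetical stand-in for that pattern, not lodash's actual code:

```javascript
// Minimal sketch of the prototype pollution pattern behind CVE-2020-8203.
// `deepSet` is a hypothetical stand-in for an unguarded deep assignment
// like _.zipObjectDeep in lodash before 4.17.20 -- it is NOT lodash's code.
function deepSet(obj, path, value) {
  const keys = path.split('.');
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    // No check that the key is a safe own property: '__proto__' walks
    // straight onto Object.prototype.
    if (typeof cur[keys[i]] !== 'object' || cur[keys[i]] === null) {
      cur[keys[i]] = {};
    }
    cur = cur[keys[i]];
  }
  cur[keys[keys.length - 1]] = value;
}

const target = {};
deepSet(target, '__proto__.isAdmin', true); // attacker-controlled path

// A completely unrelated object now "has" the attacker's property:
const victim = {};
console.log(victim.isAdmin); // true
```

Once the prototype is polluted, any code that treats a truthy property such as `isAdmin` as authorization state can be subverted, which is why the advisories rate this as tampering plus denial of service.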

Bug Fix(es):

  • Previously, upgrade from Red Hat Virtualization (RHV) 4.4.1 to RHV 4.4.2 failed due to dangling symlinks from the iSCSI Storage Domain that weren't cleaned up. In this release, the upgrade succeeds. (BZ#1895356)

  • Previously, when migrating a Windows virtual machine from a VMware environment to Red Hat Virtualization 4.4.3, the migration failed due to a file permission error. In this release, the migration succeeds. (BZ#1901423)

  • Bugs fixed (https://bugzilla.redhat.com/):

1835685 - [Hosted-Engine]"Installation Guide" and "RHV Documents" didn't jump to the correct pages in hosted engine page.
1857412 - CVE-2020-8203 nodejs-lodash: prototype pollution in zipObjectDeep function
1895356 - Upgrade to 4.4.2 will fail due to dangling symlinks
1895762 - cockpit ovirt(downstream) docs links point to upstream docs.
1896536 - CVE-2015-8011 lldpd: buffer overflow in the lldp_decode function in daemon/protocols/lldp.c
1898023 - Rebase RHV-H 4.4.3 on RHEL 8.3.0.1
1898024 - Rebase RHV-H 4.4.3 on RHGS-3.5.z Batch #3
1901423 - [v2v] leaking USER and HOME environment from root causes virt-v2v error: failure: Unexpected file type which prevents VM migration
1902301 - Upgrade cockpit-ovirt to 0.14.14

  1. Solution:

For OpenShift Container Platform 4.6 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-cli.html.

Bug Fix(es):

  • send --nowait to libvirt when we collect qemu stats, to consume bz#1552092 (BZ#1613514)

  • Block moving HE hosts into different Data Centers and make HE host moved to different cluster NonOperational after activation (BZ#1702016)

  • If an in-use MAC is held by a VM on a different cluster, the engine does not attempt to get the next free MAC. (BZ#1760170)

  • Search backend cannot find VMs which name starts with a search keyword (BZ#1797717)

  • [Permissions] DataCenterAdmin role defined on DC level does not allow Cluster creation (BZ#1808320)

  • enable-usb-autoshare is always 0 in console.vv and usb-filter option is listed two times (BZ#1811466)

  • NumaPinningHelper is not huge pages aware, denies migration to suitable host (BZ#1812316)

  • Adding quota to group doesn't propagate to users (BZ#1822372)

  • Engine adding PCI-E elements on XML of i440FX SeaBIOS VM created from Q35 Template (BZ#1829691)

  • Live Migration Bandwidth unit is different from Engine configuration (Mbps) and VDSM (MBps) (BZ#1845397)

  • RHV-M shows successful operation if OVA export/import failed during "qemu-img convert" phase (BZ#1854888)

  • Cannot hotplug disk reports libvirtError: Requested operation is not valid: Domain already contains a disk with that address (BZ#1855305)

  • rhv-log-collector-analyzer --json fails with TypeError (BZ#1859314)

  • RHV 4.4 on AMD EPYC 7742 throws an NUMA related error on VM run (BZ#1866862)

  • Issue with dashboards creation when sending metrics to external Elasticsearch (BZ#1870133)

  • HostedEngine VM is broken after Cluster changed to UEFI (BZ#1871694)

  • [CNV&RHV]Notification about VM creation contain string (BZ#1873136)

  • VM stuck in Migrating status after migration completed due to incorrect status reported by VDSM after restart (BZ#1877632)

  • Use 4.5 as compatibility level for the Default DataCenter and the Default Cluster during installation (BZ#1879280)

  • unable to create/add index pattern in step 5 from kcs articles#4921101 (BZ#1881634)

  • [CNV&RHV] Remove warning about no active storage domain for Kubevirt VMs (BZ#1883844)

  • Deprecate and remove ovirt-engine-api-explorer (BZ#1884146)

  • [CNV&RHV] Disable creating new disks for Kubevirt VM (BZ#1884634)

  • Require ansible-2.9.14 in ovirt-engine (BZ#1888626)

Enhancement(s):

  • [RFE] Virtualization support for NVDIMM - RHV (BZ#1361718)

  • [RFE] - enable renaming HostedEngine VM name (BZ#1657294)

  • [RFE] Enabling Icelake new NIs - RHV (BZ#1745024)

  • [RFE] Show vCPUs and allocated memory in virtual machines summary (BZ#1752751)

  • [RFE] RHV-M Deployment/Install Needs it's own UUID (BZ#1825020)

  • [RFE] Destination Host in migrate VM dialog has to be searchable and sortable (BZ#1851865)

  • [RFE] Expose the "reinstallation required" flag of the hosts in the API (BZ#1856671)

  • Bugs fixed (https://bugzilla.redhat.com/):

1613514 - send --nowait to libvirt when we collect qemu stats, to consume bz#1552092
1657294 - [RFE] - enable renaming HostedEngine VM name
1691253 - ovirt-engine-extension-aaa-ldap-setup does not escape special characters in password
1702016 - Block moving HE hosts into different Data Centers and make HE host moved to different cluster NonOperational after activation
1752751 - [RFE] Show vCPUs and allocated memory in virtual machines summary
1760170 - If an in-use MAC is held by a VM on a different cluster, the engine does not attempt to get the next free MAC.
1797717 - Search backend cannot find VMs which name starts with a search keyword
1808320 - [Permissions] DataCenterAdmin role defined on DC level does not allow Cluster creation
1811466 - enable-usb-autoshare is always 0 in console.vv and usb-filter option is listed two times
1812316 - NumaPinningHelper is not huge pages aware, denies migration to suitable host
1822372 - Adding quota to group doesn't propagate to users
1825020 - [RFE] RHV-M Deployment/Install Needs it's own UUID
1828241 - Deleting snapshot do not display a lock for it's disks under "Disk Snapshots" tab.
1829691 - Engine adding PCI-E elements on XML of i440FX SeaBIOS VM created from Q35 Template
1842344 - Status loop due to host initialization not checking network status, monitoring finding the network issue and auto-recovery.
1845432 - [CNV&RHV] Communication with CNV cluster spamming engine.log when token is expired
1851865 - [RFE] Destination Host in migrate VM dialog has to be searchable and sortable
1854888 - RHV-M shows successful operation if OVA export/import failed during "qemu-img convert" phase
1855305 - Cannot hotplug disk reports libvirtError: Requested operation is not valid: Domain already contains a disk with that address
1856671 - [RFE] Expose the "reinstallation required" flag of the hosts in the API
1857412 - CVE-2020-8203 nodejs-lodash: prototype pollution in zipObjectDeep function
1859314 - rhv-log-collector-analyzer --json fails with TypeError
1862101 - rhv-image-discrepancies does show size of the images on the storage as size of the image in db and vice versa
1866981 - obj must be encoded before hashing
1870133 - Issue with dashboards creation when sending metrics to external Elasticsearch
1871694 - HostedEngine VM is broken after Cluster changed to UEFI
1872911 - RHV Administration Portal fails with 404 error even after updating to RHV 4.3.9
1873136 - [CNV&RHV]Notification about VM creation contain string
1876923 - PostgreSQL 12 in RHV 4.4 - engine-setup menu ref URL needs updating
1877632 - VM stuck in Migrating status after migration completed due to incorrect status reported by VDSM after restart
1877679 - Synchronize advanced virtualization module with RHEL version during host upgrade
1879199 - ovirt-engine-extension-aaa-ldap-setup fails on cert import
1879280 - Use 4.5 as compatibility level for the Default DataCenter and the Default Cluster during installation
1879377 - [DWH] Rebase bug - for the 4.4.3 release
1881634 - unable to create/add index pattern in step 5 from kcs articles#4921101
1882256 - CVE-2019-20922 nodejs-handlebars: an endless loop while processing specially-crafted templates leads to DoS
1882260 - CVE-2019-20920 nodejs-handlebars: lookup helper fails to properly validate templates allowing for arbitrary JavaScript execution
1883844 - [CNV&RHV] Remove warning about no active storage domain for Kubevirt VMs
1884146 - Deprecate and remove ovirt-engine-api-explorer
1884634 - [CNV&RHV] Disable creating new disks for Kubevirt VM
1885976 - rhv-log-collector-analyzer - argument must be str, not bytes
1887268 - Cannot perform yum update on my RHV manager (ansible conflict)
1888626 - Require ansible-2.9.14 in ovirt-engine
1889522 - metrics playbooks are broken due to typo

  1. Description:

Red Hat OpenShift Service Mesh is Red Hat's distribution of the Istio service mesh project, tailored for installation into an on-premise OpenShift Container Platform installation.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: Red Hat Virtualization security, bug fix, and enhancement update
Advisory ID:       RHSA-2020:3807-01
Product:           Red Hat Virtualization
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:3807
Issue date:        2020-09-23
CVE Names:         CVE-2020-8203 CVE-2020-11022 CVE-2020-11023 CVE-2020-14333
====================================================================

  1. Summary:

An update is now available for Red Hat Virtualization Engine 4.4.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch

  3. Description:

org.ovirt.engine-root is a core component of oVirt.

The following packages have been upgraded to a later upstream version: ansible-runner-service (1.0.5), org.ovirt.engine-root (4.4.2.3), ovirt-engine-dwh (4.4.2.1), ovirt-engine-extension-aaa-ldap (1.4.1), ovirt-engine-ui-extensions (1.2.3), ovirt-log-collector (4.4.3), ovirt-web-ui (1.6.4), rhvm-branding-rhv (4.4.5), rhvm-dependencies (4.4.1), vdsm-jsonrpc-java (1.5.5). (BZ#1674420, BZ#1866734)

A list of bugs fixed in this update is available in the Technical Notes book:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/technical_notes

Security Fix(es):

  • nodejs-lodash: prototype pollution in zipObjectDeep function (CVE-2020-8203)

  • jquery: Cross-site scripting due to improper sanitization in the jQuery.htmlPrefilter method (CVE-2020-11022)

  • jQuery: passing HTML containing <option> elements to DOM manipulation methods could result in untrusted code execution (CVE-2020-11023)

  • ovirt-engine: Reflected cross site scripting vulnerability (CVE-2020-14333)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
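
lodash 4.17.20 resolved CVE-2020-8203 by hardening how deep paths are handled. A common mitigation for this class of bug is to reject path segments that can reach the prototype chain; the sketch below illustrates that pattern (an illustrative guard, not lodash's actual patch):

```javascript
// Illustrative blocklist guard against prototype-polluting paths.
// This is a common mitigation pattern, NOT lodash's actual 4.17.20 fix.
const FORBIDDEN = new Set(['__proto__', 'constructor', 'prototype']);

function safeDeepSet(obj, path, value) {
  const keys = path.split('.');
  if (keys.some((k) => FORBIDDEN.has(k))) {
    throw new Error('refusing prototype-polluting key in path: ' + path);
  }
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (typeof cur[keys[i]] !== 'object' || cur[keys[i]] === null) {
      cur[keys[i]] = {};
    }
    cur = cur[keys[i]];
  }
  cur[keys[keys.length - 1]] = value;
  return obj;
}

// Legitimate nested assignment still works:
console.log(safeDeepSet({}, 'a.b', 1).a.b); // 1

// A malicious path is rejected instead of polluting Object.prototype:
try {
  safeDeepSet({}, '__proto__.polluted', true);
} catch (e) {
  console.log('blocked:', e.message);
}
console.log({}.polluted); // undefined
```

Freezing `Object.prototype` or using `Map` for attacker-influenced keys are alternative defenses; the primary remediation remains upgrading lodash to 4.17.20 or later.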

Bug Fix(es):

  • Cannot assign direct LUN from FC storage - grayed out (BZ#1625499)

  • VM portal always asks how to open console.vv even it has been set to default application. (BZ#1638217)

  • RESTAPI Not able to remove the QoS from a disk profile (BZ#1643520)

  • On OVA import, qemu-img fails to write to NFS storage domain (BZ#1748879)

  • Possible missing block path for a SCSI host device needs to be handled in the UI (BZ#1801206)

  • Scheduling Memory calculation disregards huge-pages (BZ#1804037)

  • Engine does not reduce scheduling memory when a VM with dynamic hugepages runs. (BZ#1804046)

  • In Admin Portal, "Huge Pages (size: amount)" needs to be clarified (BZ#1806339)

  • Refresh LUN is using host from different Data Center to scan the LUN (BZ#1838051)

  • Unable to create Windows VM's with Mozilla Firefox version 74.0.1 and greater for RHV-M GUI/Webadmin portal (BZ#1843234)

  • [RHV-CNV] - NPE when creating new VM in cnv cluster (BZ#1854488)

  • [CNV&RHV] Add-Disk operation failed to complete. (BZ#1855377)

  • Cannot create KubeVirt VM as a normal user (BZ#1859460)

  • Welcome page - remove Metrics Store links and update "Insights Guide" link (BZ#1866466)

  • [RHV 4.4] Change in CPU model name after RHVH upgrade (BZ#1869209)

  • VM vm-name is down with error. Exit message: unsupported configuration: Can't add USB input device. USB bus is disabled. (BZ#1871235)

  • spec_ctrl host feature not detected (BZ#1875609)

Enhancement(s):

  • [RFE] API for changed blocks/sectors for a disk for incremental backup usage (BZ#1139877)

  • [RFE] Improve workflow for storage migration of VMs with multiple disks (BZ#1749803)

  • [RFE] Move the Remove VM button to the drop down menu when viewing details such as snapshots (BZ#1763812)

  • [RFE] enhance search filter for Storage Domains with free argument (BZ#1819260)

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/2974891

  5. Bugs fixed (https://bugzilla.redhat.com/):

1625499 - Cannot assign direct LUN from FC storage - grayed out
1638217 - VM portal always asks how to open console.vv even it has been set to default application.
1643520 - RESTAPI Not able to remove the QoS from a disk profile
1674420 - [RFE] - add support for Cascadelake-Server CPUs (and IvyBridge)
1748879 - On OVA import, qemu-img fails to write to NFS storage domain
1749803 - [RFE] Improve workflow for storage migration of VMs with multiple disks
1758024 - Long running Ansible tasks timeout and abort for RHV-H hosts with STIG/Security Profiles applied
1763812 - [RFE] Move the Remove VM button to the drop down menu when viewing details such as snapshots
1778471 - Using more than one asterisk in LDAP search string is not working when searching for AD users.
1787854 - RHV: Updating/reinstall a host which is part of affinity labels is removed from the affinity label.
1801206 - Possible missing block path for a SCSI host device needs to be handled in the UI
1803856 - [Scale] ovirt-vmconsole takes too long or times out in a 500+ VM environment.
1804037 - Scheduling Memory calculation disregards huge-pages
1804046 - Engine does not reduce scheduling memory when a VM with dynamic hugepages runs.
1806339 - In Admin Portal, "Huge Pages (size: amount)" needs to be clarified
1816951 - [CNV&RHV] CNV VM migration failure is not handled correctly by the engine
1819260 - [RFE] enhance search filter for Storage Domains with free argument
1826255 - [CNV&RHV]Change name of type of provider - CNV -> OpenShift Virtualization
1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper sanitization in the jQuery.htmlPrefilter method
1831949 - RESTAPI javadoc contains missing information about assigning IP address to NIC
1831952 - RESTAPI contains malformed link around JSON representation of the cluster
1831954 - RESTAPI javadoc contains malformed link around oVirt guest agent
1831956 - RESTAPI javadoc contains malformed link around time zone representation
1838051 - Refresh LUN is using host from different Data Center to scan the LUN
1841112 - not able to upload vm from OVA when there are 2 OVA from the same vm in same directory
1843234 - Unable to create Windows VM's with Mozilla Firefox version 74.0.1 and greater for RHV-M GUI/Webadmin portal
1850004 - CVE-2020-11023 jQuery: passing HTML containing <option> elements to manipulation methods could result in untrusted code execution

  6. Package List:

RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:

Source:
ansible-runner-service-1.0.5-1.el8ev.src.rpm
ovirt-engine-4.4.2.3-0.6.el8ev.src.rpm
ovirt-engine-dwh-4.4.2.1-1.el8ev.src.rpm
ovirt-engine-extension-aaa-ldap-1.4.1-1.el8ev.src.rpm
ovirt-engine-ui-extensions-1.2.3-1.el8ev.src.rpm
ovirt-log-collector-4.4.3-1.el8ev.src.rpm
ovirt-web-ui-1.6.4-1.el8ev.src.rpm
rhvm-branding-rhv-4.4.5-1.el8ev.src.rpm
rhvm-dependencies-4.4.1-1.el8ev.src.rpm
vdsm-jsonrpc-java-1.5.5-1.el8ev.src.rpm

noarch:
ansible-runner-service-1.0.5-1.el8ev.noarch.rpm
ovirt-engine-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-backend-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-dbscripts-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-dwh-4.4.2.1-1.el8ev.noarch.rpm
ovirt-engine-dwh-grafana-integration-setup-4.4.2.1-1.el8ev.noarch.rpm
ovirt-engine-dwh-setup-4.4.2.1-1.el8ev.noarch.rpm
ovirt-engine-extension-aaa-ldap-1.4.1-1.el8ev.noarch.rpm
ovirt-engine-extension-aaa-ldap-setup-1.4.1-1.el8ev.noarch.rpm
ovirt-engine-health-check-bundler-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-restapi-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-setup-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-setup-base-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-setup-plugin-cinderlib-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-setup-plugin-imageio-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-setup-plugin-ovirt-engine-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-setup-plugin-websocket-proxy-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-tools-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-tools-backup-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-ui-extensions-1.2.3-1.el8ev.noarch.rpm
ovirt-engine-vmconsole-proxy-helper-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-webadmin-portal-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-engine-websocket-proxy-4.4.2.3-0.6.el8ev.noarch.rpm
ovirt-log-collector-4.4.3-1.el8ev.noarch.rpm
ovirt-web-ui-1.6.4-1.el8ev.noarch.rpm
python3-ovirt-engine-lib-4.4.2.3-0.6.el8ev.noarch.rpm
rhvm-4.4.2.3-0.6.el8ev.noarch.rpm
rhvm-branding-rhv-4.4.5-1.el8ev.noarch.rpm
rhvm-dependencies-4.4.1-1.el8ev.noarch.rpm
vdsm-jsonrpc-java-1.5.5-1.el8ev.noarch.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  7. References:

https://access.redhat.com/security/cve/CVE-2020-8203
https://access.redhat.com/security/cve/CVE-2020-11022
https://access.redhat.com/security/cve/CVE-2020-11023
https://access.redhat.com/security/cve/CVE-2020-14333
https://access.redhat.com/security/updates/classification/#moderate

  8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2020 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBX2t0HtzjgjWX9erEAQhpWg/+KolNmhmQCrst8TmYsC2IgSdHP+q0LKLj
gdPZYu0ixOpwLLiAhrsoDXqL3H3w7UDSKkSISgPMEqEde4Vp+zI37O1q3E/P7CAj
rfLGuL1UDEiy0q0g1BP13GrPlg6K4fR5wQAnTB6vD/ZY+wd50Z0T+NGAxd2w68bM
R5q1kSOUPc4AZt25FORU2cmp775Y7DWazMWHC77uiJHgyCwVqLtdO09iEnglZDKJ
BynwyT8exZKXxmmpE4QZ4X7wNo3Y0mTiRZo5eyxxQpwj9X+qw1V+pBdtMH/C1yhk
J+X1f+wDoe2jCx2bqPXqp6EgFSHnJNt96jV0oTdD0f8rMgWcBDStNXdagPBmBCBp
t+Kq3BZx0Oqkig4f+DCEmoS0V0fB9UQLg0Q/M9p1bTfYQkbn+BMHL7CAp8UyAzPH
A1HlnP7TtQgplFvoap82xt2pXh97VvI6x3sBGHyW4Fz0SykhRYx3dAgmqy5nEssl
5ApWZ87M3l+2tUh4ZOJAtzRDt9sL5KQsXjp1jZaK/gWBsL4Suzr9AIrs4NmRmXnY
TzxdXgIY6C+dWmB4TPhcJE5etcvtorqvs93d47yBdpRyO/IlbEw0vLUBdVZZuj9N
mqp6RcHqDKm6Yv4B73Ud5my44wSRWVWtBxO6fivQOQG7iqCyIlA3M3LUMkVy+fxc
bvmOI0eIsZw=
=Jhpi
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce

JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1417 - zstd compressed layers
PROJQUAY-1449 - As a Quay admin I want to rely on the Operator to auto-scale all stateless parts of Quay
PROJQUAY-1535 - As a user I can create and use nested repository name structures
PROJQUAY-1583 - add "disconnected" annotation to operators
PROJQUAY-1609 - Operator communicates status per managed component
PROJQUAY-1610 - Operator does not make Quay deployment wait on Clair deployment
PROJQUAY-1791 - v1beta CRD EOL
PROJQUAY-1883 - Support OCP Re-encrypt routes
PROJQUAY-1887 - allow either sha or tag in related images
PROJQUAY-1926 - As an admin, I want an API to create first user, so I can automate deployment.
PROJQUAY-1998 - note database deprecations in 3.6 Config Tool
PROJQUAY-2050 - Support OCP Edge-Termination
PROJQUAY-2100 - A customer can update the Operator from 3.3 to 3.6 directly
PROJQUAY-2102 - add clair-4.2 enrichment data to quay UI
PROJQUAY-672 - MutatingAdmissionWebhook Created Automatically for QBO During Install




{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202007-1448",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.4"
      },
      {
        "model": "communications cloud native core policy",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "1.11.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "communications billing and revenue management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.0.0.3.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.11"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.7"
      },
      {
        "model": "banking liquidity management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "communications billing and revenue management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "7.5.0.23.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "banking virtual account management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "communications subscriber-aware load balancer",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "cz8.3"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking virtual account management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.12"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "cz8.4"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "lodash",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "lodash",
        "version": "4.17.20"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.3.0"
      },
      {
        "model": "jd edwards enterpriseone tools",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.2.6.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.0"
      },
      {
        "model": "communications session router",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "cz8.4"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.11"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.59"
      },
      {
        "model": "communications subscriber-aware load balancer",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "cz8.4"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking liquidity management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.0"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "pcz3.3"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.0"
      },
      {
        "model": "blockchain platform",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "21.1.2"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.2.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking virtual account management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.58"
      },
      {
        "model": "banking liquidity management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "lodash",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "lodash",
        "version": "4.17.15"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-8203"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:lodash:lodash:*:*:*:*:*:node.js:*:*",
                "cpe_name": [],
                "versionEndExcluding": "4.17.20",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_billing_and_revenue_management:12.0.0.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_billing_and_revenue_management:7.5.0.23.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_virtual_account_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "17.12.11",
                "versionStartIncluding": "17.12.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:pcz3.3:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_subscriber-aware_load_balancer:cz8.3:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_subscriber-aware_load_balancer:cz8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_router:cz8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:cz8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:9.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "20.12.7",
                "versionStartIncluding": "20.12.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "19.12.11",
                "versionStartIncluding": "19.12.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "18.8.12",
                "versionStartIncluding": "18.8.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_virtual_account_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_virtual_account_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:1.11.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_liquidity_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_liquidity_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_liquidity_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "9.2.6.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:blockchain_platform:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "21.1.2",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-8203"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160589"
      },
      {
        "db": "PACKETSTORM",
        "id": "159727"
      },
      {
        "db": "PACKETSTORM",
        "id": "160209"
      },
      {
        "db": "PACKETSTORM",
        "id": "158797"
      },
      {
        "db": "PACKETSTORM",
        "id": "158796"
      },
      {
        "db": "PACKETSTORM",
        "id": "159275"
      },
      {
        "db": "PACKETSTORM",
        "id": "164555"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      }
    ],
    "trust": 1.3
  },
  "cve": "CVE-2020-8203",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "NVD",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.8,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "impactScore": 4.9,
            "integrityImpact": "PARTIAL",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "MEDIUM",
            "trust": 1.0,
            "userInteractionRequired": false,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Medium",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "Partial",
            "baseScore": 5.8,
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-008656",
            "impactScore": null,
            "integrityImpact": "Partial",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.8,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "VHN-186328",
            "impactScore": 4.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULMON",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.8,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2020-8203",
            "impactScore": 4.9,
            "integrityImpact": "PARTIAL",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "MEDIUM",
            "trust": 0.1,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "NVD",
            "availabilityImpact": "HIGH",
            "baseScore": 7.4,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.2,
            "impactScore": 5.2,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "High",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 7.4,
            "baseSeverity": "High",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-008656",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2020-8203",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "JVNDB-2020-008656",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202007-1043",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-186328",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-8203",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-186328"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-8203"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-8203"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20. lodash is vulnerable to resource allocation without restrictions or throttling. Information may be tampered with, and service operation may be interrupted (DoS). lodash is an open source JavaScript utility library. An input validation error vulnerability exists in lodash 4.17.15 and earlier versions. A remote attacker could exploit this vulnerability to execute arbitrary code on the system via the \u0027merge\u0027, \u0027mergeWith\u0027 and \u0027defaultsDeep\u0027 functions. These packages include redhat-release-virtualization-host,\novirt-node, and rhev-hypervisor. RHVH features a Cockpit user\ninterface for monitoring the host\u0027s resources and performing administrative\ntasks. These\npackages include redhat-release-virtualization-host, ovirt-node, and\nrhev-hypervisor. RHVH features a Cockpit user interface for\nmonitoring the host\u0027s resources and performing administrative tasks. \n\nBug Fix(es):\n\n* Previously, upgrade from Red Hat Virtualization (RHV) 4.4.1 to RHV 4.4.2\nfailed due to dangling symlinks from the iSCSI Storage Domain that weren\u0027t\ncleaned up. In this release, the upgrade succeeds. (BZ#1895356)\n\n* Previously, when migrating a Windows virtual machine from a VMware\nenvironment to Red Hat Virtualization 4.4.3, the migration failed due to a\nfile permission error. In this release, the migration succeeds. \n(BZ#1901423)\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1835685 - [Hosted-Engine]\"Installation Guide\" and \"RHV Documents\" didn\u0027t jump to the correct pages in hosted engine page. \n1857412 - CVE-2020-8203 nodejs-lodash: prototype pollution in zipObjectDeep function\n1895356 - Upgrade to 4.4.2 will fail due to dangling symlinks\n1895762 - cockpit ovirt(downstream) docs links point to upstream docs. 
\n1896536 - CVE-2015-8011 lldpd: buffer overflow in the lldp_decode function in daemon/protocols/lldp.c\n1898023 - Rebase RHV-H 4.4.3 on RHEL 8.3.0.1\n1898024 - Rebase RHV-H 4.4.3 on RHGS-3.5.z Batch #3\n1901423 - [v2v] leaking USER and HOME environment from root causes virt-v2v error: failure: Unexpected file type which prevents VM migration\n1902301 - Upgrade cockpit-ovirt to 0.14.14\n\n6. Solution:\n\nFor OpenShift Container Platform 4.6 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -cli.html. \n\nBug Fix(es):\n\n* send --nowait to libvirt when we collect qemu stats, to consume\nbz#1552092 (BZ#1613514)\n\n* Block moving HE hosts into different Data Centers and make HE host moved\nto different cluster NonOperational after activation (BZ#1702016)\n\n* If an in-use MAC is held by a VM on a different cluster, the engine does\nnot attempt to get the next free MAC. 
(BZ#1760170)\n\n* Search backend cannot find VMs which name starts with a search keyword\n(BZ#1797717)\n\n* [Permissions] DataCenterAdmin role defined on DC level does not allow\nCluster creation (BZ#1808320)\n\n* enable-usb-autoshare is always 0 in console.vv and usb-filter option is\nlisted two times (BZ#1811466)\n\n* NumaPinningHelper is not huge pages aware, denies migration to suitable\nhost (BZ#1812316)\n\n* Adding quota to group doesn\u0027t propagate to users (BZ#1822372)\n\n* Engine adding PCI-E elements on XML of i440FX SeaBIOS VM created from Q35\nTemplate (BZ#1829691)\n\n* Live Migration Bandwidth unit is different from Engine configuration\n(Mbps) and VDSM (MBps) (BZ#1845397)\n\n* RHV-M shows successful operation if OVA export/import failed during\n\"qemu-img convert\" phase (BZ#1854888)\n\n* Cannot hotplug disk reports libvirtError: Requested operation is not\nvalid: Domain already contains a disk with that address (BZ#1855305)\n\n* rhv-log-collector-analyzer --json fails with TypeError (BZ#1859314)\n\n* RHV 4.4 on AMD EPYC 7742 throws an NUMA related error on VM run\n(BZ#1866862)\n\n* Issue with dashboards creation when sending metrics to external\nElasticsearch (BZ#1870133)\n\n* HostedEngine VM is broken after Cluster changed to UEFI (BZ#1871694)\n\n* [CNV\u0026RHV]Notification about VM creation contain \u003cUNKNOWN\u003e string\n(BZ#1873136)\n\n* VM stuck in Migrating status after migration completed due to incorrect\nstatus reported by VDSM after restart (BZ#1877632)\n\n* Use 4.5 as compatibility level for the Default DataCenter and the Default\nCluster during installation (BZ#1879280)\n\n* unable to create/add index pattern in step 5 from kcs articles#4921101\n(BZ#1881634)\n\n* [CNV\u0026RHV] Remove warning about no active storage domain for Kubevirt VMs\n(BZ#1883844)\n\n* Deprecate and remove ovirt-engine-api-explorer (BZ#1884146)\n\n* [CNV\u0026RHV] Disable creating new disks for Kubevirt VM (BZ#1884634)\n\n* Require ansible-2.9.14 in 
ovirt-engine (BZ#1888626)\n\nEnhancement(s):\n\n* [RFE] Virtualization support for NVDIMM - RHV (BZ#1361718)\n\n* [RFE] - enable renaming HostedEngine VM name (BZ#1657294)\n\n* [RFE] Enabling Icelake new NIs - RHV (BZ#1745024)\n\n* [RFE] Show vCPUs and allocated memory in virtual machines summary\n(BZ#1752751)\n\n* [RFE] RHV-M Deployment/Install Needs it\u0027s own UUID (BZ#1825020)\n\n* [RFE] Destination Host in migrate VM dialog has to be searchable and\nsortable (BZ#1851865)\n\n* [RFE] Expose the \"reinstallation required\" flag of the hosts in the API\n(BZ#1856671)\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1613514 - send --nowait to libvirt when we collect qemu stats, to consume bz#1552092\n1657294 - [RFE] - enable renaming HostedEngine VM name\n1691253 - ovirt-engine-extension-aaa-ldap-setup does not escape special characters in password\n1702016 - Block moving HE hosts into different Data Centers and make HE host moved to different cluster NonOperational after activation\n1752751 - [RFE] Show vCPUs and allocated memory in virtual machines summary\n1760170 - If an in-use MAC is held by a VM on a different cluster, the engine does not attempt to get the next free MAC. \n1797717 - Search backend cannot find VMs which name starts with a search keyword\n1808320 - [Permissions] DataCenterAdmin role defined on DC level does not allow Cluster creation\n1811466 - enable-usb-autoshare is always 0 in console.vv and usb-filter option is listed two times\n1812316 - NumaPinningHelper is not huge pages aware, denies migration to suitable host\n1822372 - Adding quota to group doesn\u0027t propagate to users\n1825020 - [RFE] RHV-M Deployment/Install Needs it\u0027s own UUID\n1828241 - Deleting snapshot do not display a lock for it\u0027s disks under \"Disk Snapshots\" tab. 
\n1829691 - Engine adding PCI-E elements on XML of i440FX SeaBIOS VM created from Q35 Template\n1842344 - Status loop due to host initialization not checking network status, monitoring finding the network issue and auto-recovery. \n1845432 - [CNV\u0026RHV] Communicatoin with CNV cluster spamming engine.log when token is expired\n1851865 - [RFE] Destination Host in migrate VM dialog has to be searchable and sortable\n1854888 - RHV-M shows successful operation if OVA export/import failed during \"qemu-img convert\" phase\n1855305 - Cannot hotplug disk reports libvirtError: Requested operation is not valid: Domain already contains a disk with that address\n1856671 - [RFE] Expose the \"reinstallation required\" flag of the hosts in the API\n1857412 - CVE-2020-8203 nodejs-lodash: prototype pollution in zipObjectDeep function\n1859314 - rhv-log-collector-analyzer --json fails with TypeError\n1862101 - rhv-image-discrepancies does show size of the images on the storage as size of the image in db and vice versa\n1866981 - obj must be encoded before hashing\n1870133 - Issue with dashboards creation when sending metrics to external Elasticsearch\n1871694 - HostedEngine VM is broken after Cluster changed to UEFI\n1872911 - RHV Administration Portal fails with 404 error even after updating to RHV 4.3.9\n1873136 - [CNV\u0026RHV]Notification about VM creation contain \u003cUNKNOWN\u003e string\n1876923 - PostgreSQL 12 in RHV 4.4 - engine-setup menu ref URL needs updating\n1877632 - VM stuck in Migrating status after migration completed due to incorrect status reported by VDSM after restart\n1877679 - Synchronize advanced virtualization module with RHEL version during host upgrade\n1879199 - ovirt-engine-extension-aaa-ldap-setup fails on cert import\n1879280 - Use 4.5 as compatibility level for the Default DataCenter and the Default Cluster during installation\n1879377 - [DWH] Rebase bug - for the 4.4.3 release\n1881634 - unable to create/add index pattern in step 5 from kcs 
articles#4921101\n1882256 - CVE-2019-20922 nodejs-handlebars: an endless loop while processing specially-crafted templates leads to DoS\n1882260 - CVE-2019-20920 nodejs-handlebars: lookup helper fails to properly validate templates allowing for arbitrary JavaScript execution\n1883844 - [CNV\u0026RHV] Remove warning about no active storage domain for Kubevirt VMs\n1884146 - Deprecate and remove ovirt-engine-api-explorer\n1884634 - [CNV\u0026RHV] Disable creating new disks for Kubevirt VM\n1885976 - rhv-log-collector-analyzer - argument must be str, not bytes\n1887268 - Cannot perform yum update on my RHV manager (ansible conflict)\n1888626 - Require ansible-2.9.14 in ovirt-engine\n1889522 - metrics playbooks are broken due to typo\n\n6. Description:\n\nRed Hat OpenShift Service Mesh is Red Hat\u0027s distribution of the Istio\nservice mesh project, tailored for installation into an on-premise\nOpenShift Container Platform installation. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: Red Hat Virtualization security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2020:3807-01\nProduct:           Red Hat Virtualization\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2020:3807\nIssue date:        2020-09-23\nCVE Names:         CVE-2020-8203 CVE-2020-11022 CVE-2020-11023\n                   CVE-2020-14333\n====================================================================\n1. Summary:\n\nAn update is now available for Red Hat Virtualization Engine 4.4. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. 
Relevant releases/architectures:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch\n\n3. Description:\n\nThe org.ovirt.engine-root is a core component of oVirt. \n\nThe following packages have been upgraded to a later upstream version:\nansible-runner-service (1.0.5), org.ovirt.engine-root (4.4.2.3),\novirt-engine-dwh (4.4.2.1), ovirt-engine-extension-aaa-ldap (1.4.1),\novirt-engine-ui-extensions (1.2.3), ovirt-log-collector (4.4.3),\novirt-web-ui (1.6.4), rhvm-branding-rhv (4.4.5), rhvm-dependencies (4.4.1),\nvdsm-jsonrpc-java (1.5.5). (BZ#1674420, BZ#1866734)\n\nA list of bugs fixed in this update is available in the Technical Notes\nbook:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht\nml-single/technical_notes\n\nSecurity Fix(es):\n\n* nodejs-lodash: prototype pollution in zipObjectDeep function\n(CVE-2020-8203)\n\n* jquery: Cross-site scripting due to improper injQuery.htmlPrefilter\nmethod (CVE-2020-11022)\n\n* jQuery: passing HTML containing \u003coption\u003e elements to manipulation methods\ncould result in untrusted code execution (CVE-2020-11023)\n\n* ovirt-engine: Reflected cross site scripting vulnerability\n(CVE-2020-14333)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* Cannot assign direct LUN from FC storage - grayed out (BZ#1625499)\n\n* VM portal always asks how to open console.vv even it has been set to\ndefault application. 
(BZ#1638217)\n\n* RESTAPI Not able to remove the QoS from a disk profile (BZ#1643520)\n\n* On OVA import, qemu-img fails to write to NFS storage domain (BZ#1748879)\n\n* Possible missing block path for a SCSI host device needs to be handled in\nthe UI (BZ#1801206)\n\n* Scheduling Memory calculation disregards huge-pages (BZ#1804037)\n\n* Engine does not reduce scheduling memory when a VM with dynamic hugepages\nruns. (BZ#1804046)\n\n* In Admin Portal, \"Huge Pages (size: amount)\" needs to be clarified\n(BZ#1806339)\n\n* Refresh LUN is using host from different Data Center to scan the LUN\n(BZ#1838051)\n\n* Unable to create Windows VM\u0027s with Mozilla Firefox version 74.0.1 and\ngreater for RHV-M GUI/Webadmin portal (BZ#1843234)\n\n* [RHV-CNV] - NPE when creating new VM in cnv cluster (BZ#1854488)\n\n* [CNV\u0026RHV] Add-Disk operation failed to complete. (BZ#1855377)\n\n* Cannot create KubeVirt VM as a normal user (BZ#1859460)\n\n* Welcome page - remove Metrics Store links and update \"Insights Guide\"\nlink (BZ#1866466)\n\n* [RHV 4.4] Change in CPU model name after RHVH upgrade (BZ#1869209)\n\n* VM vm-name is down with error. Exit message: unsupported configuration:\nCan\u0027t add USB input device. USB bus is disabled. (BZ#1871235)\n\n* spec_ctrl host feature not detected (BZ#1875609)\n\nEnhancement(s):\n\n* [RFE] API for changed blocks/sectors for a disk for incremental backup\nusage (BZ#1139877)\n\n* [RFE] Improve workflow for storage migration of VMs with multiple disks\n(BZ#1749803)\n\n* [RFE] Move the Remove VM button to the drop down menu when viewing\ndetails such as snapshots (BZ#1763812)\n\n* [RFE] enhance search filter for Storage Domains with free argument\n(BZ#1819260)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1625499 - Cannot assign direct LUN from FC storage - grayed out\n1638217 - VM portal always asks how to open console.vv even it has been set to default application. \n1643520 - RESTAPI Not able to remove the QoS from a disk profile\n1674420 - [RFE] - add support for Cascadelake-Server CPUs (and IvyBridge)\n1748879 - On OVA import, qemu-img fails to write to NFS storage domain\n1749803 - [RFE] Improve workflow for storage migration of VMs with multiple disks\n1758024 - Long running Ansible tasks timeout and abort for RHV-H hosts with STIG/Security Profiles applied\n1763812 - [RFE] Move the Remove VM button to the drop down menu when viewing details such as snapshots\n1778471 - Using more than one asterisk in LDAP search string is not working when searching for  AD users. \n1787854 - RHV: Updating/reinstall a host which is part of affinity labels is removed from the affinity label. \n1801206 - Possible missing block path for a SCSI host device needs to be handled in the UI\n1803856 - [Scale] ovirt-vmconsole takes too long or times out in a 500+ VM environment. \n1804037 - Scheduling Memory calculation disregards huge-pages\n1804046 - Engine does not reduce scheduling memory when a VM with dynamic hugepages runs. 
\n1806339 - In Admin Portal, \"Huge Pages (size: amount)\" needs to be clarified\n1816951 - [CNV\u0026RHV] CNV VM migration failure is not handled correctly by the engine\n1819260 - [RFE] enhance search filter for Storage Domains with free argument\n1826255 - [CNV\u0026RHV]Change name of type of provider - CNV -\u003e OpenShift Virtualization\n1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method\n1831949 - RESTAPI javadoc contains missing information about assigning IP address to NIC\n1831952 - RESTAPI contains malformed link around JSON representation fo the cluster\n1831954 - RESTAPI javadoc contains malformed link around oVirt guest agent\n1831956 - RESTAPI javadoc contains malformed link around time zone representation\n1838051 - Refresh LUN is using host from different Data Center to scan the LUN\n1841112 - not able to upload vm from OVA when there are 2 OVA from the same vm in same directory\n1843234 - Unable to create Windows VM\u0027s with Mozilla Firefox version 74.0.1 and greater for RHV-M GUI/Webadmin portal\n1850004 - CVE-2020-11023 jQuery: passing HTML containing \u003coption\u003e elements to manipulation methods could result in untrusted code execution\n1854488 - [RHV-CNV] - NPE when creating new VM in cnv cluster\n1855377 - [CNV\u0026RHV] Add-Disk operation failed to complete. \n1857412 - CVE-2020-8203 nodejs-lodash: prototype pollution in zipObjectDeep function\n1858184 - CVE-2020-14333 ovirt-engine: Reflected cross site scripting vulnerability\n1859460 - Cannot create KubeVirt VM as a normal user\n1860907 - Upgrade bundled GWT to 2.9.0\n1866466 - Welcome page - remove Metrics Store links and update \"Insights Guide\" link\n1866734 - [DWH] Rebase bug - for the 4.4.2 release\n1869209 - [RHV 4.4] Change in CPU model name after RHVH upgrade\n1869302 - ansible 2.9.12 - host deploy fixes\n1871235 - VM vm-name is down with error. Exit message: unsupported configuration: Can\u0027t add USB input device. 
USB bus is disabled. \n1875609 - spec_ctrl host feature not detected\n1875851 - Web Admin interface broken on Firefox ESR 68.11\n\n6. Package List:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:\n\nSource:\nansible-runner-service-1.0.5-1.el8ev.src.rpm\novirt-engine-4.4.2.3-0.6.el8ev.src.rpm\novirt-engine-dwh-4.4.2.1-1.el8ev.src.rpm\novirt-engine-extension-aaa-ldap-1.4.1-1.el8ev.src.rpm\novirt-engine-ui-extensions-1.2.3-1.el8ev.src.rpm\novirt-log-collector-4.4.3-1.el8ev.src.rpm\novirt-web-ui-1.6.4-1.el8ev.src.rpm\nrhvm-branding-rhv-4.4.5-1.el8ev.src.rpm\nrhvm-dependencies-4.4.1-1.el8ev.src.rpm\nvdsm-jsonrpc-java-1.5.5-1.el8ev.src.rpm\n\nnoarch:\nansible-runner-service-1.0.5-1.el8ev.noarch.rpm\novirt-engine-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-backend-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-dbscripts-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-dwh-4.4.2.1-1.el8ev.noarch.rpm\novirt-engine-dwh-grafana-integration-setup-4.4.2.1-1.el8ev.noarch.rpm\novirt-engine-dwh-setup-4.4.2.1-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-1.4.1-1.el8ev.noarch.rpm\novirt-engine-extension-aaa-ldap-setup-1.4.1-1.el8ev.noarch.rpm\novirt-engine-health-check-bundler-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-restapi-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-base-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-cinderlib-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-imageio-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-common-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-setup-plugin-websocket-proxy-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-tools-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-tools-backup-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-ui-extensions-1.2.3-1.el8ev.noarch.rpm\novirt-engine-vmconsole-proxy-helper-4.4.2.3-0.6.e
l8ev.noarch.rpm\novirt-engine-webadmin-portal-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-engine-websocket-proxy-4.4.2.3-0.6.el8ev.noarch.rpm\novirt-log-collector-4.4.3-1.el8ev.noarch.rpm\novirt-web-ui-1.6.4-1.el8ev.noarch.rpm\npython3-ovirt-engine-lib-4.4.2.3-0.6.el8ev.noarch.rpm\nrhvm-4.4.2.3-0.6.el8ev.noarch.rpm\nrhvm-branding-rhv-4.4.5-1.el8ev.noarch.rpm\nrhvm-dependencies-4.4.1-1.el8ev.noarch.rpm\nvdsm-jsonrpc-java-1.5.5-1.el8ev.noarch.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-8203\nhttps://access.redhat.com/security/cve/CVE-2020-11022\nhttps://access.redhat.com/security/cve/CVE-2020-11023\nhttps://access.redhat.com/security/cve/CVE-2020-14333\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBX2t0HtzjgjWX9erEAQhpWg/+KolNmhmQCrst8TmYsC2IgSdHP+q0LKLj\ngdPZYu0ixOpwLLiAhrsoDXqL3H3w7UDSKkSISgPMEqEde4Vp+zI37O1q3E/P7CAj\nrfLGuL1UDEiy0q0g1BP13GrPlg6K4fR5wQAnTB6vD/ZY+wd50Z0T+NGAxd2w68bM\nR5q1kSOUPc4AZt25FORU2cmp775Y7DWazMWHC77uiJHgyCwVqLtdO09iEnglZDKJ\nBynwyT8exZKXxmmpE4QZ4X7wNo3Y0mTiRZo5eyxxQpwj9X+qw1V+pBdtMH/C1yhk\nJ+X1f+wDoe2jCx2bqPXqp6EgFSHnJNt96jV0oTdD0f8rMgWcBDStNXdagPBmBCBp\nt+Kq3BZx0Oqkig4f+DCEmoS0V0fB9UQLg0Q/M9p1bTfYQkbn+BMHL7CAp8UyAzPH\nA1HlnP7TtQgplFvoap82xt2pXh97VvI6x3sBGHyW4Fz0SykhRYx3dAgmqy5nEssl\n5ApWZ87M3l+2tUh4ZOJAtzRDt9sL5KQsXjp1jZaK/gWBsL4Suzr9AIrs4NmRmXnY\nTzxdXgIY6C+dWmB4TPhcJE5etcvtorqvs93d47yBdpRyO/IlbEw0vLUBdVZZuj9N\nmqp6RcHqDKm6Yv4B73Ud5my44wSRWVWtBxO6fivQOQG7iqCyIlA3M3LUMkVy+fxc\nbvmOI0eIsZw=Jhpi\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1417 - zstd compressed layers\nPROJQUAY-1449 - As a Quay admin I want to rely on the Operator to auto-scale all stateless parts of Quay\nPROJQUAY-1535 -  As a user I can create and use nested repository name structures\nPROJQUAY-1583 - add \"disconnected\" annotation to operators\nPROJQUAY-1609 - Operator communicates status per managed component\nPROJQUAY-1610 - Operator does not make Quay deployment wait on Clair deployment\nPROJQUAY-1791 - v1beta CRD EOL\nPROJQUAY-1883 - Support OCP Re-encrypt routes\nPROJQUAY-1887 - allow either sha or tag in related images\nPROJQUAY-1926 - As an admin, I want an API to create first user, so I can automate deployment. 
\nPROJQUAY-1998 - note database deprecations in 3.6 Config Tool\nPROJQUAY-2050 - Support OCP Edge-Termination\nPROJQUAY-2100 - A customer can update the Operator from 3.3 to 3.6 directly\nPROJQUAY-2102 - add clair-4.2 enrichment data to quay UI\nPROJQUAY-672 - MutatingAdmissionWebhook Created Automatically for QBO During Install\n\n6",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-8203"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "db": "VULHUB",
        "id": "VHN-186328"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-8203"
      },
      {
        "db": "PACKETSTORM",
        "id": "160589"
      },
      {
        "db": "PACKETSTORM",
        "id": "159727"
      },
      {
        "db": "PACKETSTORM",
        "id": "160209"
      },
      {
        "db": "PACKETSTORM",
        "id": "158797"
      },
      {
        "db": "PACKETSTORM",
        "id": "158796"
      },
      {
        "db": "PACKETSTORM",
        "id": "159275"
      },
      {
        "db": "PACKETSTORM",
        "id": "164555"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-8203",
        "trust": 3.3
      },
      {
        "db": "HACKERONE",
        "id": "712065",
        "trust": 1.8
      },
      {
        "db": "PACKETSTORM",
        "id": "158797",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "160589",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "160209",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "159275",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1043",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164555",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2021072725",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021072145",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022041931",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021042310",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4460",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2715",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3700",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3255",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.3143",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3472",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5150",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4180",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5790",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "158796",
        "trust": 0.2
      },
      {
        "db": "VULHUB",
        "id": "VHN-186328",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-8203",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159727",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-186328"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-8203"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "db": "PACKETSTORM",
        "id": "160589"
      },
      {
        "db": "PACKETSTORM",
        "id": "159727"
      },
      {
        "db": "PACKETSTORM",
        "id": "160209"
      },
      {
        "db": "PACKETSTORM",
        "id": "158797"
      },
      {
        "db": "PACKETSTORM",
        "id": "158796"
      },
      {
        "db": "PACKETSTORM",
        "id": "159275"
      },
      {
        "db": "PACKETSTORM",
        "id": "164555"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-8203"
      }
    ]
  },
  "id": "VAR-202007-1448",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-186328"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2024-01-21T21:15:51.312000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "CVE-2020-8203 is not modified in /.internal/baseSet.js #4874",
        "trust": 0.8,
        "url": "https://github.com/lodash/lodash/issues/4874"
      },
      {
        "title": "lodash Enter the fix for the verification error vulnerability",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=124909"
      },
      {
        "title": "Debian CVElist Bug Report Logs: node-lodash: CVE-2020-8203",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=e2a3a37cadf3658ad136a09d0edc4403"
      },
      {
        "title": "Red Hat: Important: Red Hat Virtualization security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205611 - security advisory"
      },
      {
        "title": "Red Hat: Low: Red Hat Virtualization security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205179 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Virtualization security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20203807 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20203369 - security advisory"
      },
      {
        "title": "IBM: Security Bulletin: Security Vulnerabilities affect IBM Cloud Pak for Data \u2013 Node.js (CVE-2020-8203)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=0d7ed837a314c7bb63d61727a6cea7fa"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6.1 image security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204298 - security advisory"
      },
      {
        "title": "node-elm-compiler",
        "trust": 0.1,
        "url": "https://github.com/rtfeldman/node-elm-compiler "
      },
      {
        "title": "CloudGuard-ShiftLeft-CICD",
        "trust": 0.1,
        "url": "https://github.com/chkp-dhouari/cloudguard-shiftleft-cicd "
      },
      {
        "title": "CloudGuard-ShiftLeft-CICD-mams",
        "trust": 0.1,
        "url": "https://github.com/mamadoudemb/cloudguard-shiftleft-cicd-mams "
      },
      {
        "title": "shiftleft-cicd-demo",
        "trust": 0.1,
        "url": "https://github.com/ecarbon277/shiftleft-cicd-demo "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/p3sky/cloudguard-shifleft-cicd "
      },
      {
        "title": "shiftleftv3",
        "trust": 0.1,
        "url": "https://github.com/puryersc/shiftleftv3 "
      },
      {
        "title": "shiftleftv2",
        "trust": 0.1,
        "url": "https://github.com/puryersc/shiftleftv2 "
      },
      {
        "title": "shiftleftv4",
        "trust": 0.1,
        "url": "https://github.com/puryersc/shiftleftv4 "
      },
      {
        "title": "Web-CTF-Cheatsheet",
        "trust": 0.1,
        "url": "https://github.com/duckstroms/web-ctf-cheatsheet "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-8203"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-1321",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-770",
        "trust": 0.9
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-186328"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-8203"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.4,
        "url": "https://www.oracle.com/security-alerts/cpuapr2021.html"
      },
      {
        "trust": 2.4,
        "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
      },
      {
        "trust": 2.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8203"
      },
      {
        "trust": 1.8,
        "url": "https://security.netapp.com/advisory/ntap-20200724-0006/"
      },
      {
        "trust": 1.8,
        "url": "https://github.com/lodash/lodash/issues/4874"
      },
      {
        "trust": 1.8,
        "url": "https://hackerone.com/reports/712065"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-8203"
      },
      {
        "trust": 0.7,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-security-vulnerabilities-affect-ibm-cloud-pak-for-data-node-js-cve-2020-8203/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-8203"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4460/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.3143"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021072145"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164555/red-hat-security-advisory-2021-3917-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022041931"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3472"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/158797/red-hat-security-advisory-2020-3369-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/159275/red-hat-security-advisory-2020-3807-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/160589/red-hat-security-advisory-2020-5611-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-vulnerability-in-javascript-affects-ibm-license-metric-tool-v9-cve-2020-8203/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-oss-security-scan-issues-for-concerto-installer/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-oss-scan-fixes-for-content-pos/"
      },
      {
        "trust": 0.6,
        "url": "https://www.oracle.com/security-alerts/cpujul2021.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-affect-ibm-planning-analytics/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021042310"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/160209/red-hat-security-advisory-2020-5179-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3700/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4180/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-jquery-spring-dom4j-mongodb-linux-kernel-targetcli-fb-jackson-node-js-and-apache-commons-affect-ibm-spectrum-protect-plus/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5150"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021072725"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5790"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2715/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/node-js-lodash-privilege-escalation-via-prototype-pollution-33309"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3255/"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/articles/2974891"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-11023"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15366"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-11022"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20922"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#low"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20920"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-20922"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-20920"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9283"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11023"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/1321.html"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=965283"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/rtfeldman/node-elm-compiler"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2015-8011"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2015-8011"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5611"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10743"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15718"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20657"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19126"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1712"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12448"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8676"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1549"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-9251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20060"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-19519"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-7150"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1547"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-7664"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12052"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14973"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8690"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20060"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13752"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11324"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-7146"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1010204"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11324"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11236"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10739"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-18751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16890"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8671"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12049"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-19519"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15719"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2013-0169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-18624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13753"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11459"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8679"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12795"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20657"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5094"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3844"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6454"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12450"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20483"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14336"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4298"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1010180"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7598"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8681"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3825"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8523"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-18074"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2013-0169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6706"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20483"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20337"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8687"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13822"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19923"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16769"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14822"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14404"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-7665"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8457"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5953"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8689"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15847"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14498"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11236"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12245"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14404"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1010204"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18408"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16890"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14498"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-7149"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16056"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10739"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20337"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-18074"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11110"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19959"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8675"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10531"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3843"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1010180"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10715"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-18751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-18624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-9251"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12448"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11008"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11459"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5179"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:3369"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12666"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.5/jaeger/jaeger_install/rhb"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:3370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:3807"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14333"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14333"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11022"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1109"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-21270"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25292"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26237"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25289"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25648"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-3728"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-34552"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35653"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25289"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35654"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1109"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-3721"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23368"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1107"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-3774"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-16137"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-21270"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23382"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15366"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25291"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16492"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27921"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-3774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27515"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1010266"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35654"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27923"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25290"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22923"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16492"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1010266"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1107"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3917"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35653"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23382"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-16138"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-3728"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-3721"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27516"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-16138"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-16137"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25293"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23368"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-186328"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-8203"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "db": "PACKETSTORM",
        "id": "160589"
      },
      {
        "db": "PACKETSTORM",
        "id": "159727"
      },
      {
        "db": "PACKETSTORM",
        "id": "160209"
      },
      {
        "db": "PACKETSTORM",
        "id": "158797"
      },
      {
        "db": "PACKETSTORM",
        "id": "158796"
      },
      {
        "db": "PACKETSTORM",
        "id": "159275"
      },
      {
        "db": "PACKETSTORM",
        "id": "164555"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-8203"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-186328"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-8203"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "db": "PACKETSTORM",
        "id": "160589"
      },
      {
        "db": "PACKETSTORM",
        "id": "159727"
      },
      {
        "db": "PACKETSTORM",
        "id": "160209"
      },
      {
        "db": "PACKETSTORM",
        "id": "158797"
      },
      {
        "db": "PACKETSTORM",
        "id": "158796"
      },
      {
        "db": "PACKETSTORM",
        "id": "159275"
      },
      {
        "db": "PACKETSTORM",
        "id": "164555"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-8203"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-07-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-186328"
      },
      {
        "date": "2020-07-15T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-8203"
      },
      {
        "date": "2020-09-18T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "date": "2020-12-17T17:36:24",
        "db": "PACKETSTORM",
        "id": "160589"
      },
      {
        "date": "2020-10-27T16:59:02",
        "db": "PACKETSTORM",
        "id": "159727"
      },
      {
        "date": "2020-11-24T15:30:15",
        "db": "PACKETSTORM",
        "id": "160209"
      },
      {
        "date": "2020-08-07T18:27:30",
        "db": "PACKETSTORM",
        "id": "158797"
      },
      {
        "date": "2020-08-07T18:27:14",
        "db": "PACKETSTORM",
        "id": "158796"
      },
      {
        "date": "2020-09-24T00:30:36",
        "db": "PACKETSTORM",
        "id": "159275"
      },
      {
        "date": "2021-10-19T15:32:20",
        "db": "PACKETSTORM",
        "id": "164555"
      },
      {
        "date": "2020-07-15T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      },
      {
        "date": "2020-07-15T17:15:11.797000",
        "db": "NVD",
        "id": "CVE-2020-8203"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-05-12T00:00:00",
        "db": "VULHUB",
        "id": "VHN-186328"
      },
      {
        "date": "2022-05-12T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-8203"
      },
      {
        "date": "2020-09-18T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      },
      {
        "date": "2023-06-05T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      },
      {
        "date": "2024-01-21T02:37:13.193000",
        "db": "NVD",
        "id": "CVE-2020-8203"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "lodash Vulnerability in resource allocation without restrictions or throttling in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-008656"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "input validation error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1043"
      }
    ],
    "trust": 0.6
  }
}

var-202102-1466
Vulnerability from variot

Lodash versions prior to 4.17.21 are vulnerable to command injection via the template function. Lodash contains a command injection vulnerability. A successful exploit may allow information to be obtained or tampered with, and may interrupt service operation, resulting in a denial-of-service (DoS) condition. A security vulnerability exists in Lodash; keep an eye on CNNVD or vendor announcements for updates. Description:

The ovirt-engine package provides the manager for virtualization environments. This manager enables admins to define hosts and networks, as well as to add storage, create VMs and manage user permissions.

Bug Fix(es):

  • This release adds the queue attribute to the virtio-scsi driver in the virtual machine configuration. This improvement enables multi-queue performance with the virtio-scsi driver. (BZ#911394)

  • With this release, source-load-balancing has been added as a new sub-option for xmit_hash_policy. It can be configured for bond modes balance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying xmit_hash_policy=vlan+srcmac. (BZ#1683987)

  • The default DataCenter/Cluster will be set to compatibility level 4.6 on new installations of Red Hat Virtualization 4.4.6. (BZ#1950348)

  • With this release, support has been added for copying disks between regular Storage Domains and Managed Block Storage Domains. It is now possible to migrate disks between Managed Block Storage Domains and regular Storage Domains. (BZ#1906074)

  • Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was set by default to false and was intended to be used in cluster compatibility levels below 4.4, but the value was set at the "general" version level. With this release, each cluster level has its own value, defaulting to false for 4.4 and above. This reduces unnecessary overhead from timeouts of the file system freeze command. (BZ#1932284)

  • With this release, running virtual machines is supported for up to 16TB of RAM on x86_64 architectures. (BZ#1944723)

  • This release adds the gathering of oVirt/RHV related certificates to allow easier debugging of issues for faster customer help and issue resolution. Information from certificates is now included as part of the sosreport. Note that no corresponding private key information is gathered, due to security considerations. (BZ#1845877)
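The xmit_hash_policy=vlan+srcmac sub-option mentioned above can be applied when creating a bond. The fragment below is a hedged illustration using NetworkManager's nmcli; the connection name "bond0" and the interface names "eno1"/"eno2" are placeholders, not values from this advisory:

```
# Create a balance-xor (mode 2) bond that hashes on VLAN tag + source MAC.
# "bond0", "eno1" and "eno2" are example names for this sketch.
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=balance-xor,xmit_hash_policy=vlan+srcmac"

# Attach two example interfaces to the bond:
nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0

# Verify which hash policy the kernel actually applied:
grep -i "hash policy" /proc/net/bonding/bond0
```

The same xmit_hash_policy value works for modes 802.3ad (4) and balance-tlb (5), per the bug fix description above.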

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/2974891

Bugs fixed (https://bugzilla.redhat.com/):

1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine
1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors
1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain
1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine
1717411 - improve engine logging when migration fail
1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs
1775145 - Incorrect message from hot-plugging memory
1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC.
1845877 - [RFE] Collect information about RHV PKI
1875363 - engine-setup failing on FIPS enabled rhel8 machine
1906074 - [RFE] Support disks copy between regular and managed block storage domains
1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration
1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning
1919195 - Unable to create snapshot without saving memory of running VM from VM Portal.
1919984 - engine-setup fails to deploy the grafana service in an external DWH server
1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal
1926018 - Failed to run VM after FIPS mode is enabled
1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing 'rsyslog-gnutls' package.
1928158 - Rename 'CA Certificate' link in welcome page to 'Engine CA certificate'
1928188 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX"
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929211 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX"
1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error "missing groups or modules: virt:8.4"
1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful
1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured
1932284 - Engine handled FS freeze is not fast enough for Windows systems
1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed
1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2
1943267 - Snapshot creation is failing for VM having vGPU.
1944723 - [RFE] Support virtual machines with 16TB memory
1948577 - [welcome page] remove "Infrastructure Migration" section (obsoleted)
1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule
1949547 - rhv-log-collector-analyzer report contains 'b characters
1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6
1950466 - Host installation failed
1954401 - HP VMs pinning is wiped after edit->ok and pinned to first physical CPUs.

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
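Among the bugs listed, CVE-2021-23337 (nodejs-lodash: command injection via template) stems from splicing a caller-controlled option into generated function source. The sketch below is a simplified, hedged illustration of that pattern; `naiveTemplate` is a hypothetical helper, not lodash's actual implementation:

```javascript
// Simplified illustration of the code-injection pattern behind
// CVE-2021-23337 -- NOT lodash's actual source. A template compiler
// builds JavaScript source as a string; if the caller-supplied
// `variable` option is spliced in unescaped, a crafted value can close
// the parameter list and inject arbitrary statements into the code
// that gets compiled.
function naiveTemplate(text, options = {}) {
  const variable = options.variable || "obj";
  // `variable` lands verbatim inside the generated source string:
  const source = "(function(" + variable + ") { return '" + text + "'; })";
  return eval(source);
}

// Benign use works as expected:
const hello = naiveTemplate("hello");
console.log(hello({})); // prints "hello"

// A malicious `variable` value can rewrite the generated source before
// it is compiled; lodash 4.17.21 mitigates this by validating that the
// `variable` option is a plain identifier path and throwing otherwise.
console.log(typeof naiveTemplate("ok", { variable: "data" })); // "function"
```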

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.8.2 bug fix and security update
Advisory ID:       RHSA-2021:2438-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:2438
Issue date:        2021-07-27
CVE Names:         CVE-2016-2183 CVE-2020-7774 CVE-2020-15106
                   CVE-2020-15112 CVE-2020-15113 CVE-2020-15114
                   CVE-2020-15136 CVE-2020-26160 CVE-2020-26541
                   CVE-2020-28469 CVE-2020-28500 CVE-2020-28852
                   CVE-2021-3114 CVE-2021-3121 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3537 CVE-2021-3541 CVE-2021-3636
                   CVE-2021-20206 CVE-2021-20271 CVE-2021-20291
                   CVE-2021-21419 CVE-2021-21623 CVE-2021-21639
                   CVE-2021-21640 CVE-2021-21648 CVE-2021-22133
                   CVE-2021-23337 CVE-2021-23362 CVE-2021-23368
                   CVE-2021-23382 CVE-2021-25735 CVE-2021-25737
                   CVE-2021-26539 CVE-2021-26540 CVE-2021-27292
                   CVE-2021-28092 CVE-2021-29059 CVE-2021-29622
                   CVE-2021-32399 CVE-2021-33034 CVE-2021-33194
                   CVE-2021-33909
=====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.8.2 is now available with updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container Platform 4.8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.8.2. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2021:2437

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Security Fix(es):

  • SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) (CVE-2016-2183)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)

  • etcd: DoS in wal/wal.go (CVE-2020-15112)

  • etcd: directories created via os.MkdirAll are not checked for permissions (CVE-2020-15113)

  • etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS (CVE-2020-15114)

  • etcd: no authentication is performed against endpoints provided in the --endpoints flag (CVE-2020-15136)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)

  • nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)

  • golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)

  • golang: crypto/elliptic: incorrect operations on the P-224 curve (CVE-2021-3114)

  • containernetworking-cni: Arbitrary path injection via type field in CNI configuration (CVE-2021-20206)

  • containers/storage: DoS via malicious image (CVE-2021-20291)

  • prometheus: open redirect under the /new endpoint (CVE-2021-29622)

  • golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)

  • go.elastic.co/apm: leaks sensitive HTTP headers during panic (CVE-2021-22133)

Space precludes listing in detail the following additional CVE fixes: (CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382), (CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and (CVE-2021-23368)
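Several of the fixes above (the nodejs-y18n and nodejs-lodash advisories) address prototype pollution. As a hedged illustration of the bug class only, and not either library's actual source, the sketch below shows how a naive recursive merge over attacker-controlled keys can be steered into `Object.prototype` via a `__proto__` key:

```javascript
// Illustrative sketch of the prototype-pollution bug class; NOT the
// source of y18n or lodash. A naive deep merge descends into whatever
// key the input names -- including "__proto__", which on a plain object
// resolves to Object.prototype.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === "object" &&
        target[key] && typeof target[key] === "object") {
      naiveMerge(target[key], value); // can recurse into Object.prototype
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property, so the
// merge writes "polluted" onto Object.prototype itself.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);
console.log({}.polluted); // prints true: every fresh object sees the key
```

The patched library versions close this class of hole by refusing to treat `__proto__` (and similar prototype-reaching keys) as ordinary merge paths.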

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
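The lodash ReDoS entry (CVE-2020-28500) belongs to a well-known pattern: trimming built on a backtracking regular expression. The following is a hedged sketch of that pattern and a linear-time alternative, for illustration only and not lodash's actual code:

```javascript
// Sketch of the ReDoS pattern class behind CVE-2020-28500 (illustrative,
// not lodash's source). Running /\s+$/ over a string of N spaces that
// ends in a non-space character retries the match from every offset,
// costing roughly O(N^2) -- enough to stall a service for large N.
const regexTrimEnd = (s) => s.replace(/\s+$/, "");

// Linear-time alternative: walk back to the last non-whitespace character.
function scanTrimEnd(s) {
  let end = s.length;
  while (end > 0 && /\s/.test(s[end - 1])) end--;
  return s.slice(0, end);
}

// Both agree on benign inputs; only the regex version degrades on
// adversarial ones such as " ".repeat(1e5) + "!".
console.log(scanTrimEnd("lodash   ")); // prints "lodash"
console.log(regexTrimEnd("lodash   ") === scanTrimEnd("lodash   ")); // prints true
```

This is why a ReDoS in a trimming helper is rated as a denial-of-service issue: the attacker only needs to supply one long, nearly-all-whitespace string to a code path that trims it.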

Additional Changes:

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-x86_64

The image digest is sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-s390x

The image digest is sha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le

The image digest is sha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f

All OpenShift Container Platform 4.8 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor

  2. Solution:

For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html

  3. Bugs fixed (https://bugzilla.redhat.com/):

1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) 1725981 - oc explain does not work well with full resource.group names 1747270 - [osp] Machine with name "-worker"couldn't join the cluster 1772993 - rbd block devices attached to a host are visible in unprivileged container pods 1786273 - [4.6] KAS pod logs show "error building openapi models ... has invalid property: anyOf" for CRDs 1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts 1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header 1812212 - ArgoCD example application cannot be downloaded from github 1817954 - [ovirt] Workers nodes are not numbered sequentially 1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole 1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1825417 - The containerruntimecontroller doesn't roll back to CR-1 if we delete CR-2 1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades 1835264 - Intree provisioner doesn't respect PVC.spec.dataSource sometimes 1839101 - Some sidebar links in developer perspective don't follow same project 1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes 1846875 - Network setup test high failure rate 1848151 - Console continues to poll the ClusterVersion resource when the user doesn't have authority 1850060 - After upgrading to 3.11.219 timeouts are appearing. 
1852637 - Kubelet sets incorrect image names in node status images section 1852743 - Node list CPU column only show usage 1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values 1857008 - [Edge] [BareMetal] Not provided STATE value for machines 1857477 - Bad helptext for storagecluster creation 1859382 - check-endpoints panics on graceful shutdown 1862084 - Inconsistency of time formats in the OpenShift web-console 1864116 - Cloud credential operator scrolls warnings about unsupported platform 1866222 - Should output all options when runing operator-sdk init --help 1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard 1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert 1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions 1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host 1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions 1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go 1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS 1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag 1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method 1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics 1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly 1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore 1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack 1873649 - proxy.config.openshift.io should validate user inputs 1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials 1874931 - Accessibility - 
Keyboard shortcut to exit YAML editor not easily discoverable 1876918 - scheduler test leaves taint behind 1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1 1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable 1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API 1879077 - Nodes tainted after configuring additional host iface 1879140 - console auth errors not understandable by customers 1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens 1879184 - CVO must detect or log resource hotloops 1879495 - [4.6] namespace \“openshift-user-workload-monitoring\” does not exist” 1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string 1879944 - [OCP 4.8] Slow PV creation with vsphere 1880757 - AWS: master not removed from LB/target group when machine deleted 1880758 - Component descriptions in cloud console have bad description (Managed by Terraform) 1881210 - nodePort for router-default metrics with NodePortService does not exist 1881481 - CVO hotloops on some service manifests 1881484 - CVO hotloops on deployment manifests 1881514 - CVO hotloops on imagestreams from cluster-samples-operator 1881520 - CVO hotloops on (some) clusterrolebindings 1881522 - CVO hotloops on clusterserviceversions packageserver 1881662 - Error getting volume limit for plugin kubernetes.io/ in kubelet logs 1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io 1881938 - migrator deployment doesn't tolerate masters 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1883587 - No option for user to select volumeMode 1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine 1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster 1884800 
- Failed to set up mount unit: Invalid argument 1885186 - Removing ssh keys MC does not remove the key from authorized_keys 1885349 - [IPI Baremetal] Proxy Information Not passed to metal3 1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses 1886572 - auth: error contacting auth provider when extra ingress (not default) goes down 1887849 - When creating new storage class failure_domain is missing. 1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs 1889689 - AggregatedAPIErrors alert may never fire 1890678 - Cypress: Fix 'structure' accesibility violations 1890828 - Intermittent prune job failures causing operator degradation 1891124 - CP Conformance: CRD spec and status failures 1891301 - Deleting bmh by "oc delete bmh' get stuck 1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass 1891766 - [LSO] Min-Max filter's from OCS wizard accepts Negative values and that cause PV not getting created 1892642 - oauth-server password metrics do not appear in UI after initial OCP installation 1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version 1893850 - Add an alert for requests rejected by the apiserver 1893999 - can't login ocp cluster with oc 4.7 client without the username 1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion 1895053 - Allow builds to optionally mount in cluster trust stores 1896226 - recycler-pod template should not be in kubelet static manifests directory 1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name 1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI 
install 1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout 1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing 1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability 1899057 - fix spurious br-ex MAC address error log 1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay 1899587 - [External] RGW usage metrics shown on Object Service Dashboard is incorrect 1900454 - Enable host-based disk encryption on Azure platform 1900819 - Scaled ingress replicas following sharded pattern don't balance evenly across multi-AZ 1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed 1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API 1901648 - "do you need to set up custom dns" tooltip inaccurate 1902003 - Jobs Completions column is not sorting when there are "0 of 1" and "1 of 1" in the list. 1902076 - image registry operator should monitor status of its routes 1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs 1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given 1903228 - Pod stuck in Terminating, runc init process frozen 1903383 - Latest RHCOS 47.83. 
builds failing to install: mount /root.squashfs failed 1903553 - systemd container renders node NotReady after deleting it 1903700 - metal3 Deployment doesn't have unique Pod selector 1904006 - The --dir option doest not work for command oc image extract 1904505 - Excessive Memory Use in Builds 1904507 - vsphere-problem-detector: implement missing metrics 1904558 - Random init-p error when trying to start pod 1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests 1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list 1905159 - Installation on previous unused dasd fails after formatting 1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory 1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails 1905577 - Control plane machines not adopted when provisioning network is disabled 1905627 - Warn users when using an unsupported browser such as IE 1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP 1905849 - Default volumesnapshotclass should be created when creating default storageclass 1906056 - Bundles skipped via the skips field cannot be pinned 1906102 - CBO produces standard metrics 1906147 - ironic-rhcos-downloader should not use --insecure 1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart 1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region 1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage 1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value 1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything 1907614 - Update kubernetes deps to 1.20 1908068 - Enable DownwardAPIHugePages feature gate 1908169 - The example of Import URL is "Fedora cloud image list" for all 
templates. 1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container 1908343 - Input labels in Manage columns modal should be clickable 1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures 1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule 1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes 1908765 - [SCALE] enable OVN lflow data path groups 1908774 - [SCALE] enable OVN DB memory trimming on compaction 1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it 1909091 - Pod/node/ip/template isn't showing when vm is running 1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error 1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing 1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade 1910067 - UPI: openstacksdk fails on "server group list" 1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing 1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status 1910378 - socket timeouts for webservice communication between pods 1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling 1910500 - Could not list CSI provisioner on web when create storage class on GCP platform 1911211 - Should show the cert-recovery-controller version correctly 1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames 1912571 - libvirt: Support setting dnsmasq options through the install config 1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade 1913112 - BMC details should be 
optional for unmanaged hosts 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913341 - GCP: strange cluster behavior in CI run 1913399 - switch to v1beta1 for the priority and fairness APIs 1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint 1913532 - After a 4.6 to 4.7 upgrade, a node went unready 1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory" 1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs 1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root 1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20 1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names 1915693 - Not able to install gpu-operator on cpumanager enabled node. 1915971 - Role and Role Binding breadcrumbs do not work as expected 1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page 1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall 1916392 - scrape priority and fairness endpoints for must-gather 1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form 1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready" 1916553 - Default template's description is empty on details tab 1916593 - Destroy cluster sometimes stuck in a loop 1916872 - need ability to reconcile exgw annotations on pod add 1916890 - [OCP 4.7] api or api-int not available during installation 1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs. 
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state 1917328 - It should default to current namespace when create vm from template action on details page 1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'" 1917485 - [oVirt] ovirt machine/machineset object has missing some field validations 1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube. 1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3 1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library 1918101 - [vsphere]Delete Provisioning machine took about 12 minutes 1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass 1918442 - Service Reject ACL does not work on dualstack 1918723 - installer fails to write boot record on 4k scsi lun on s390x 1918729 - Add hide/reveal button for the token field in the KMS configuration page 1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve 1918785 - Pod request and limit calculations in console are incorrect 1918910 - Scale from zero annotations should not requeue if instance type missing 1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test" 1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0 1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone 1919168 - oc adm catalog mirror doesn't work for the air-gapped cluster 1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize 1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster 1919356 - Add missing profile annotation in cluster-update-keys manifests 1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration 1919398 - Permissive 
Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic 1919406 - OperatorHub filter heading "Provider Type" should be "Source" 1919737 - hostname lookup delays when master node down 1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade 1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests 1920300 - cri-o does not support configuration of stream idle time 1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console 1920532 - Problem in trying to connect through the service to a member that is the same as the caller. 1920677 - Various missingKey errors in the devconsole namespace 1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources 1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster 1920903 - oc adm top reporting unknown status for Windows node 1920905 - Remove DNS lookup workaround from cluster-api-provider 1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard 1921184 - kuryr-cni binds to wrong interface on machine with two interfaces 1921227 - Fix issues related to consuming new extensions in Console static plugins 1921264 - Bundle unpack jobs can hang indefinitely 1921267 - ResourceListDropdown not internationalized 1921321 - SR-IOV obliviously reboot the node 1921335 - ThanosSidecarUnhealthy 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel] 1921763 - operator registry has high memory usage in 4.7... 
cleanup row closes 1921778 - Push to stage now failing with semver issues on old releases 1921780 - Search page not fully internationalized 1921781 - DefaultList component not internationalized 1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes 1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often 1921892 - MAO: controller runtime manager closes event recorder 1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated 1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label 1921953 - ClusterServiceVersion property inference does not infer package and version 1922063 - "Virtual Machine" should be "Templates" in template wizard 1922065 - Rootdisk size is default to 15GiB in customize wizard 1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch 1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted 1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt 1922646 - Panic in authentication-operator invoking webhook authorization 1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists" 1922764 - authentication operator is degraded due to number of kube-apiservers 1922992 - some button text on YAML sidebar are not translated 1922997 - [Migration]The SDN migration rollback failed. 1923038 - [OSP] Cloud Info is loaded twice 1923157 - Ingress traffic performance drop due to NodePort services 1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set. 
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2 1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors 1923984 - Incorrect anti-affinity for UWM prometheus 1924020 - panic: runtime error: index out of range [0] with length 0 1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true 1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too 1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable 1924171 - ovn-kube must handle single-stack to dual-stack migration 1924358 - metal UPI setup fails, no worker nodes 1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument 1924536 - 'More about Insights' link points to support link 1924585 - "Edit Annotation" are not correctly translated in Chinese 1924586 - Control Plane status and Operators status are not fully internationalized 1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased 1924663 - Insights operator should collect related pod logs when operator is degraded 1924701 - Cluster destroy fails when using byo with Kuryr 1924728 - Difficult to identify deployment issue if the destination disk is too small 1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086) 1924747 - InventoryItem doesn't internationalize resource kind 1924788 - Not clear error message when there are no NADs available for the user 1924816 - Misleading error messages in ironic-conductor log 1924869 - selinux avc deny after installing OCP 4.7 1924916 - PVC reported as Uploading when it is actually cloning 1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces 1924953 - 
newly added 'excessive etcd leader changes' test case failing in serial job 1924968 - Monitoring list page filter options are not translated 1924983 - some components in utils directory not localized 1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name' 1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn 1925083 - Some texts are not marked for translation on idp creation page. 1925087 - Add i18n support for the Secret page 1925148 - Shouldn't create the redundant imagestream when use oc new-app --name=testapp2 -i with exist imagestream 1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard 1925216 - openshift installer fails immediately failed to fetch Install Config 1925236 - OpenShift Route targets every port of a multi-port service 1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service 1925261 - Items marked as mandatory in KMS Provider form are not enforced 1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot 1925343 - [ci] e2e-metal tests are not using reserved instances 1925493 - Enable snapshot e2e tests 1925586 - cluster-etcd-operator is leaking transports 1925614 - Error: InstallPlan.operators.coreos.com not found 1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers 1926029 - [RFE] Either disable save or give warning when no disks support snapshot 1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists. 
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400) 1926082 - Insights operator should not go degraded during upgrade 1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized 1926115 - Texts in “Insights” popover on overview page are not marked for i18n 1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7 1926126 - some kebab/action menu translation issues 1926131 - Add HPA page is not fully internationalized 1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it 1926154 - Create new pool with arbiter - wrong replica 1926278 - [oVirt] consume K8S 1.20 packages 1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning 1926285 - ignore pod not found status messages 1926289 - Accessibility: Modal content hidden from screen readers 1926310 - CannotRetrieveUpdates alerts on Critical severity 1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus. 1926336 - Service details can overflow boxes at some screen widths 1926346 - move to go 1.15 and registry.ci.openshift.org 1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM 1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints 1926484 - API server exits non-zero on 2 SIGTERM signals 1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag 1926579 - Setting .spec.policy is deprecated and will be removed eventually. 
Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log 1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results 1926776 - "Template support" modal appears when select the RHEL6 common template 1926835 - [e2e][automation] prow gating use unsupported CDI version 1926843 - pipeline with finally tasks status is improper 1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade 1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the resources section. 1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin 1926931 - Inconsistent ovs-flow rule on one of the app node for egress node 1926943 - vsphere-problem-detector: Alerts in CI jobs 1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs 1927013 - Tables don't render properly at smaller screen widths 1927017 - CCO does not relinquish leadership when restarting for proxy CA change 1927042 - Empty static pod files on UPI deployments are confusing 1927047 - multiple external gateway pods will not work in ingress with IP fragmentation 1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64 1927075 - [e2e][automation] Fix pvc string in pvc.view 1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page 1927244 - UPI installation with Kuryr timing out on bootstrap stage 1927263 - kubelet service takes around 43 secs to start container when started from stopped state 1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver 1927310 - Performance: Console makes unnecessary requests for en-US messages on load 1927340 - Race condition in OperatorCondition reconcilation 1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS 
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady 1927393 - 4.7 still points to 4.6 catalog images 1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects 1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s 1927465 - Homepage dashboard content not internationalized 1927678 - Reboot interface defaults to softPowerOff so fencing is too slow 1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev 1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled 1927882 - Can't create cluster role binding from UI when a project is selected 1927895 - global RuntimeConfig is overwritten with merge result 1927898 - i18n Admin Notifier 1927902 - i18n Cluster Utilization dashboard duration 1927903 - "CannotRetrieveUpdates" - critical error in openshift web console 1927925 - Manually misspelled as Manualy 1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array 1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart 1927944 - cluster version operator cycles terminating state waiting for leader election 1927993 - Documentation Links in OKD Web Console are not Working 1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode 1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones 1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV 1928157 - 4.7 CNO claims to be done upgrading before it even starts 1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured 1928297 - HAProxy fails with 500 on some requests 1928473 - NetworkManager overlay FS not being created on None platform 1928512 - sap license management logs gatherer 
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not exposed as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login developer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLoopBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn’t support “csi.storage.k8s.io/fsTyps” parameter
1932135 - When “iopsPerGB” parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When “iopsPerGB” parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS] machine stuck in provisioned phase, no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can’t find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \"\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitrarily
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard Default /Kubernetes / Compute Resources / Namespace (Workloads)
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and getting alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can’t finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen doesn't stop loading
1937005 - when kuryr quotas are unlimited, we should not send alerts
1937018 - FilterToolbar component does not handle 'null' value for 'rowFilters' prop
1937020 - Release new from image stream chooses incorrect ID based on status
1937077 - Blank White page on Topology
1937102 - Pod Containers Page Not Translated
1937122 - CAPBM changes to support flexible reboot modes
1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted
1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes
1937244 - [Local Storage] The model name of aws EBS isn't extracted well
1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes
1937452 - cluster-network-operator CI linting fails in master branch
1937459 - Wrong Subnet retrieved for Service without Selector
1937460 - [CI] Network quota pre-flight checks are failing the installation
1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster
1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint
1937535 - Not all image pulls within OpenShift builds retry
1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes
1937627 - Bump DEFAULT_DOC_URL for 4.8
1937628 - Bump upgrade channels for 4.8
1937658 - Description for storage class encryption during storagecluster creation needs to be updated
1937666 - Mouseover on headline
1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage
1937693 - ironic image "/" cluttered with files
1937694 - [oVirt] split ovirt providerIDReconciler logic into NodeController and ProviderIDController
1937717 - If browser default font size is 20, the layout of template screen breaks
1937722 - OCP 4.8 vuln due to BZ 1936445
1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator
1937941 - [RFE] fix wording for favorite templates
1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations
1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1938321 - Cannot view PackageManifest objects in YAML on 'Home > Search' page nor 'CatalogSource details > Operators tab'
1938465 - thanos-querier should set a CPU request on the thanos-query container
1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container
1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them
1938468 - kube-scheduler-operator has a container without a CPU request
1938492 - Marketplace extract container does not request CPU or memory
1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not
1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller
1938903 - Time range on dashboard page will be empty after drag and drop mouse in the graph
1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%
1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances
1938949 - [VPA] Updater failed to trigger evictions due to "vpa-admission-controller" not found
1939054 - machine healthcheck kills aws spot instance before generated
1939060 - CNO: nodes and masters are upgrading simultaneously
1939069 - Add source to vm template silently failed when no storage class is defined in the cluster
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1939168 - Builds failing for OCP 3.11 since PR#25 was merged
1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz
1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez
1939232 - CI tests using openshift/hello-world broken by Ruby Version Update
1939270 - fix co upgradeableFalse status and reason
1939294 - OLM may not delete pods with grace period zero (force delete)
1939412 - missed labels for thanos-ruler pods
1939485 - CVE-2021-20291 containers/storage: DoS via malicious image
1939547 - Include container="POD" in resource queries
1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0
1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated
1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs
1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent
1939661 - support new AWS region ap-northeast-3
1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution
1939731 - Image registry operator reports unavailable during normal serial run
1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters
1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase
1939752 - ovnkube-master sbdb container does not set requests on cpu or memory
1939753 - Delete HCO is stuck if there is still a VM in the cluster
1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page
1939853 - [DOC] Creating manifests API should not allow folder in the "file_name"
1939865 - GCP PD CSI driver does not have CSIDriver instance
1939869 - [e2e][automation] Add annotations to datavolume for HPP
1939873 - Unlimited number of characters accepted for base domain name
1939943 - cluster-kube-apiserver-operator check-endpoints observed a panic: runtime error: invalid memory address or nil pointer dereference
1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration
1940057 - Openshift builds should use a watch instead of polling when checking for pod status
1940142 - 4.6->4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying
1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network
1940206 - Selector and VolumeTableRows not i18ned
1940207 - 4.7->4.6 rollbacks stuck on prometheusrules admission webhook "no route to host"
1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)
1940318 - No data under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod'
1940322 - Split of dashboard is wrong, many Network parts
1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn't have flavors needed for compute machines
1940361 - [e2e][automation] Fix vm action tests with storageclass HPP
1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters
1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys
1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
1940499 - hybrid-overlay not logging properly before exiting due to an error
1940518 - Components in bare metal components lack resource requests
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned
1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info
1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list
1940876 - Components in ovirt components lack resource requests
1940889 - Installation failures in OpenStack release jobs
1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io
1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP
1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster
1940950 - vsphere: client/bootstrap CSR double create
1940972 - vsphere: [4.6] CSR approval delayed for unknown reason
1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16.
1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy
1941342 - Add kata-osbuilder-generate.service as part of the default presets
1941456 - Multiple pods stuck in ContainerCreating status with the message "failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user" being seen in the journal log
1941526 - controller-manager-operator: Observed a panic: nil pointer dereference
1941592 - HAProxyDown not Firing
1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp
1941625 - Developer -> Topology - i18n misses
1941635 - Developer -> Monitoring - i18n misses
1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid
1941645 - Developer -> Builds - i18n misses
1941655 - Developer -> Pipelines - i18n misses
1941667 - Developer -> Project - i18n misses
1941669 - Developer -> ConfigMaps - i18n misses
1941759 - Errored pre-flight checks should not prevent install
1941798 - Some details pages don't have internationalized ResourceKind labels
1941801 - Many filter toolbar dropdowns haven't been internationalized
1941815 - From the web console the terminal can no longer connect after leaving and returning to the terminal view
1941859 - [assisted operator] assisted pod deploy first time in error state
1941901 - Toleration merge logic does not account for multiple entries with the same key
1941915 - No validation against template name in boot source customization
1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description
1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8
1941990 - Pipeline metrics endpoint changed in osp-1.4
1941995 - fix backwards incompatible trigger api changes in osp1.4
1942086 - Administrator -> Home - i18n misses
1942117 - Administrator -> Workloads - i18n misses
1942125 - Administrator -> Serverless - i18n misses
1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)
1942207 - [vsphere] hostnames are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail
1942271 - Insights operator doesn't gather pod information from openshift-cluster-version
1942375 - CRI-O failing with error "reserving ctr name"
1942395 - The status is always "Updating" on dc detail page after deployment has failed.
1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied
1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate
1942536 - Corrupted image preventing containers from starting
1942548 - Administrator -> Networking - i18n misses
1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic
1942555 - Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork
1942557 - Query is reporting "no datapoint" when label cluster="" is set but works when the label is removed or when running directly in Prometheus
1942608 - crictl cannot list the images with an error: error locating item named "manifest" for image with ID
1942614 - Administrator -> Storage - i18n misses
1942641 - Administrator -> Builds - i18n misses
1942673 - Administrator -> Pipelines - i18n misses
1942694 - Resource names with a colon do not display properly in the browser window title
1942715 - Administrator -> User Management - i18n misses
1942716 - Quay Container Security operator has Medium <-> Low colors reversed
1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]
1942736 - Administrator -> Administration - i18n misses
1942749 - Install Operator form should use info icon for popovers
1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls
1942839 - Windows VMs fail to start on air-gapped environments
1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set
1942858 - [RFE] Confusing detach volume UX
1942883 - AWS EBS CSI driver does not support partitions
1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy
1942935 - must-gather improvements
1943145 - vsphere: client/bootstrap CSR double create
1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked
1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest
1943238 - The conditions table does not occupy 100% of the width.
1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane
1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB.
1943315 - avoid workload disruption for ICSP changes
1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes
1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest
1943356 - Dynamic plugins surfaced in the UI should be referred to as "Console plugins"
1943539 - crio-wipe is failing to start "Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container"
1943543 - DeploymentConfig Rollback doesn't reset params correctly
1943558 - [assisted operator] Assisted Service pod unable to reach self-signed local registry in disco environment
1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds
1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage
1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn
1943649 - don't use hello-openshift for network-check-target
1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress
1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions
1943804 - API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB
1943845 - Router pods should have startup probes configured
1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors
1944160 - CNO: nbctl daemon should log reconnection info
1944180 - OVN-Kube Master does not release election lock on shutdown
1944246 - Ironic fails to inspect and move node to "manageable" but bmh remains in "inspecting"
1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region
1944509 - Translatable texts without context in ssh expose component
1944581 - oc project does not work with cluster proxy
1944587 - VPA could not take actions based on the recommendation when min-replicas=1
1944590 - The field name "VolumeSnapshotContent" is wrong on VolumeSnapshotContent detail page
1944602 - Consistent failures of features/project-creation.feature Cypress test in CI
1944631 - openshift authenticator should not accept non-hashed tokens
1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stuck with ".. still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock"
1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures
1944674 - Project field becomes "All projects" and is disabled in "Review and create virtual machine" step in devconsole
1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods
1944761 - field level help instances do not use common util component
1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present
1944763 - field level help instances do not use common util component
1944853 - Update to nodejs >=14.15.4 for ARM
1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts
1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation
1945027 - Button 'Copy SSH Command' does not work
1945085 - Bring back API data in etcd test
1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled
1945103 - 'User credentials' shows even when the VM is not running
1945104 - In k8s 1.21 bump '[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume' tests are disabled
1945146 - Remove pipeline Tech preview badge for pipelines GA operator
1945236 - Bootstrap ignition shim doesn't follow proxy settings
1945261 - Operator dependency not consistently chosen from default channel
1945312 - project deletion does not reset UI project context
1945326 - console-operator: does not check route health periodically
1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules
1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly
1945443 - operator-lifecycle-manager-packageserver flaps Available=False with no reason or message
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1945548 - catalog resource update failed if spec.secrets set to ""
1945584 - Elasticsearch operator fails to install on 4.8 cluster on ppc64le/s390x
1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION
1945630 - Pod log filename no longer in -.log format
1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin
1945646 - gcp-routes.sh running as initrc_t unnecessarily
1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret
1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry
1945687 - Dockerfile needs updating to new container CI registry
1945700 - Syncing boot mode after changing device should be restricted to Supermicro
1945816 - " Ingresses " should be kept in English for Chinese
1945818 - Chinese translation issues: Operator should be the same with English Operators
1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out
1945910 - [aws] support byo iam roles for instances
1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6.
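Several of the Node.js bugs in this list, such as CVE-2020-28469 in glob-parent, are regular-expression denial of service (ReDoS) issues. A minimal sketch of that vulnerability class, using the classic textbook nested-quantifier pattern rather than the actual glob-parent expression (an assumption made here purely for illustration):

```javascript
// ReDoS sketch: a regex with nested quantifiers backtracks exponentially
// on inputs that *almost* match. This is NOT the real glob-parent regex;
// it is the standard textbook example of the same vulnerability class.
const evilPattern = /^(a+)+$/;

// Attempt a match and report whether it matched and how long it took (ms).
function matchTime(n) {
  const input = 'a'.repeat(n) + '!'; // trailing '!' forces the match to fail
  const start = process.hrtime.bigint();
  const matched = evilPattern.test(input);
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return { matched, elapsedMs };
}

// Each additional 'a' roughly doubles the backtracking work, so a few
// dozen characters of attacker-controlled input can pin a CPU core.
console.log(matchTime(10));
console.log(matchTime(22));
```

The usual fixes, as applied in the patched packages, are to rewrite the expression so quantifiers cannot overlap or to bound the input length before matching.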
1946079 - Virtual master is not getting an IP address
1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile"
1946119 - panic parsing install-config
1946243 - No relevant error when pg limit is reached in block pools page
1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image
1946320 - Incorrect error message in Deployment Attach Storage Page
1946449 - [e2e][automation] Fix cloud-init tests as UI changed
1946458 - Edit Application action overwrites Deployment envFrom values on save
1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI.
1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default
1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: "
1946506 - [on-prem] mDNS plugin no longer needed
1946513 - honor user-specified system reserved with auto node sizing
1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready
1946584 - Machine-config controller fails to generate MC when a machine config pool with dashes in its name is present in the cluster
1946607 - etcd readinessProbe is not reflective of actual readiness
1946705 - Fix issues with "search" capability in the Topology Quick Add component
1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation
1946788 - Serial tests are broken because of router
1946790 - Marketplace operator flakes Available=False OperatorStarting during updates
1946838 - Copied CSVs show up as adopted components
1946839 - [Azure] While mirroring images to private registry throwing error: invalid character '<' looking for beginning of value
1946865 - no "namespace:kube_pod_container_resource_requests_cpu_cores:sum" and "namespace:kube_pod_container_resource_requests_memory_bytes:sum" metrics
1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken
1946922 - Ingress details page doesn't show referenced secret name and link
1946929 - the default dns operator's Progressing status is always True and cluster operator dns Progressing status is False
1947036 - "failed to create Matchbox client or connect" on e2e-metal jobs or metal clusters via cluster-bot
1947066 - machine-config-operator pod crashes when noProxy is *
1947067 - [Installer] Pick up upstream fix for installer console output
1947078 - Incorrect skipped status for conditional tasks in the pipeline run
1947080 - SNO IPv6 with 'temporary 60-day domain' option fails with IPv4 exception
1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1947164 - Print "Successfully pushed" even if the build push fails.
1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed.
1947293 - IPv6 provision addresses range larger than /64 prefix (e.g. /48)
1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node names
1947360 - [vSphere csi driver operator] operator pod runs as "BestEffort" qosClass
1947371 - [vSphere csi driver operator] operator doesn't create "csidriver" instance
1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout
1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)
1947490 - If Clevis is enabled on a managed LUKS volume with Ignition, the system fails to automatically open the LUKS volume on system boot
1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)
1947663 - disk details are not synced in web-console
1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin
1947684 - MCO on SNO sometimes has rendered configs and sometimes does not
1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds at roughly 5 minute intervals.
1947719 - 8 APIRemovedInNextReleaseInUse info alerts display
1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods
1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?
1947771 - [kube-descheduler] descheduler operator pod should not run as "BestEffort" qosClass
1947774 - CSI driver operators use "Always" imagePullPolicy in some containers
1947775 - [vSphere csi driver operator] doesn't use the downstream images from payload.
1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade
1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display
1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display
1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display
1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display
1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin
1947828 - download it link should save pod log in -.log format
1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel is changed
1947917 - Egress Firewall does not reliably apply firewall rules
1947946 - Operator upgrades can delete existing CSV before completion
1948011 - openshift-controller-manager constantly reporting type "Upgradeable" status Unknown
1948012 - service-ca constantly reporting type "Upgradeable" status Unknown
1948019 - [4.8] Large number of requests to the infrastructure cinder volume service
1948022 - Some on-prem namespaces missing from must-gather
1948040 - cluster-etcd-operator: etcd is using deprecated logger
1948082 - Monitoring should not set Available=False with no reason on updates
1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O.
1948232 - DNS operator performs spurious updates in response to API's defaulting of daemonset's maxSurge and service's ipFamilies and ipFamilyPolicy fields
1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later
1948359 - [aws] shared tag was not removed from user provided IAM role
1948410 - [LSO] Local Storage Operator uses imagePullPolicy as "Always"
1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn't take effect after changing
1948427 - No action is triggered after clicking the 'Continue' button on 'Show community Operator' windows
1948431 - TechPreviewNoUpgrade does not enable CSI migration
1948436 - The outbound traffic was broken intermittently after shutting down one egressIP node
1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge
1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]
1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restarts every 10 minutes
1948513 - get-resources.sh doesn't honor the no_proxy settings
1948524 - 'DeploymentUpdated' Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute
1948546 - VM of worker is in error state when a network has port_security_enabled=False
1948553 - When set, etcd spec.LogLevel is not propagated to etcd operand
1948555 - A lot of events "rpc error: code = DeadlineExceeded desc = context deadline exceeded" were seen in azure disk csi driver verification test
1948563 - End-to-End Secure boot deployment fails "Invalid value for input variable"
1948582 - Need ability to specify local gateway mode in CNO config
1948585 - Need CI jobs to test local gateway mode with bare metal
1948592 - [Cluster Network Operator] Missing Egress Router Controller
1948606 - DNS e2e test fails "[sig-arch] Only known images used by tests" because it does not use a known image
1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
1948626 - TestRouteAdmissionPolicy e2e test is failing often
1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI
1948634 - upgrades: allow upgrades without version change
1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found
1948701 - unneeded CCO alert already covered by CVO
1948703 - p&f: probes should not get 429s
1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows bootstrap.ign was not found
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contains wrong CPU query
1948936 - [e2e][automation][prow] Prow script points to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Cannot assign multiple EgressIPs to a namespace using the automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically, which blocks the user from creating a PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere- images to vsphere- images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to missing secret (in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apiserver of cluster operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if creating a rolebinding from the rolebinding tab of the role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotContent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are not processed the same as in the in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listener timeouts are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many identical messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceeds 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation on Single Node OpenShift
1949820 - Unable to use oc adm top is shortcut when asking for imagestreams
1949862 - The ccoctl tool sometimes hits a panic when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command ccoctl aws create-identity-provider with --output-dir parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE] console page shows error when VM is paused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator uses default serviceaccount in operator bundle
1951637 - don't roll out a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utilization
1951713 - [OCP-OSP] After changing image in machine object it enters Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need to support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker while applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - "Create" button on the Templates page is confusing
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page - white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pods continue restarting due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear to users why the Snapshot feature was not available
1952730 - "Customize virtual machine" and the "Advanced" feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after setting Time Range to Custom time range, no data displays
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE] It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Deleting localvolume PV fails
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other CIDRs apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build is not available for OCP 4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to PVC, which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on an upgraded cluster
1954421 - Get 'Application is not available' when accessing Prometheus UI
1954459 - Error: Gateway Time-out displayed on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Missing translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (UtilizationCard) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumeset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] oc get apirequestcount shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though it should be shown only for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metric "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1955196 - linuxptp-daemon crash on 4.8
1955226 - operator updates apirequestcount CRD over and over
1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing
1955256 - stop collecting API that no longer exists
1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts
1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / containing "google"
1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator
1955445 - Drop crio image metrics with high
cardinality 1955457 - Drop container_memory_failures_total metric because of high cardinality 1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter 1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0 1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used 1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation 1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range 1955554 - MAO does not react to events triggered from Validating Webhook Configurations 1955589 - thanos-querier should have a PodDisruptionBudget in HA topology 1955595 - Add DevPreviewLongLifecycle Descheduler profile 1955596 - Pods stuck in creation phase on realtime kernel SNO 1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing 1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error'] 1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta 1955749 - OCP branded templates need to be translated 1955761 - packageserver clusteroperator does not set reason or message for Available condition 1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces 1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation 1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables 1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable 1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated 1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct 1955879 - Customer tags cannot be seen in S3 level 
when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio 1955969 - Workers cannot be deployed attached to multiple networks. 1956079 - Installer gather doesn't collect any networking information 1956208 - Installer should validate root volume type 1956220 - Set htt proxy system properties as expected by kubernetes-client 1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet 1956334 - Event Listener Details page does not show Triggers section 1956353 - test: analyze job consistently fails 1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate 1956405 - Bump k8s dependencies in cluster resource override admission operator 1956411 - Apply custom tags to AWS EBS volumes 1956480 - [4.8] Bootimage bump tracker 1956606 - probes FlowSchema manifest not included in any cluster profile 1956607 - Multiple manifests lack cluster profile annotations 1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup 1956610 - manage-helm-repos manifest lacks cluster profile annotations 1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string 1956650 - The container disk URL is empty for Windows guest tools 1956768 - aws-ebs-csi-driver-controller-metrics TargetDown 1956826 - buildArgs does not work when the value is taken from a secret 1956895 - Fix chatty kubelet log message 1956898 - fix log files being overwritten on container state loss 1956920 - can't open terminal for pods that have more than one container running 1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false 1956978 - Installer gather doesn't include pod names in filename 1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW 1957041 - Update CI e2echart with more node info 
1957127 - Delegated authentication: reduce the number of watch requests
1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image
1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes
1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient
1957179 - Incorrect VERSION in node_exporter
1957190 - CI jobs failing due too many watch requests (prometheus-operator)
1957198 - Misspelled console-operator condition
1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap
1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2
1957261 - update godoc for new build status image change trigger fields
1957295 - Apply priority classes conventions as test to openshift/origin repo
1957315 - kuryr-controller doesn't indicate being out of quota
1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly
1957374 - mcddrainerr doesn't list specific pod
1957386 - Config serve and validate command should be under alpha
1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions
1957502 - Infrequent panic in kube-apiserver in aws-serial job
1957561 - lack of pseudolocalization for some text on Cluster Setting page
1957584 - Routes are not getting created when using hostname without FQDN standard
1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes
1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's
1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out
1957748 - Ptp operator pod should have CPU and memory requests set but not limits
1957756 - Device Replacemet UI, The status of the disk is "replacement ready" before I clicked on "start replacement"
1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1957775 - CVO creating cloud-controller-manager too early causing upgrade failures
1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error
1957822 - Update apiserver tlsSecurityProfile description to include Custom profile
1957832 - CMO end-to-end tests work only on AWS
1957856 - 'resource name may not be empty' is shown in CI testing
1957869 - baremetal IPI power_interface for irmc is inconsistent
1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects
1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer
1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install
1957895 - Cypress helper projectDropdown.shouldContain is not an assertion
1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads
1957926 - "Add Capacity" should allow to add n3 (or n4) local devices at once
1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state
1957967 - Possible test flake in listPage Cypress view
1957972 - Leftover templates from mdns
1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7
1957982 - Deployment Actions clickable for view-only projects
1957991 - ClusterOperatorDegraded can fire during installation
1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator
1958080 - Missing i18n for login, error and selectprovider pages
1958094 - Audit log files are corrupted sometimes
1958097 - don't show "old, insecure token format" if the token does not actually exist
1958114 - Ignore staged vendor files in pre-commit script
1958126 - [OVN]Egressip doesn't take effect
1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs
1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names
1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs
1958285 - Deployment considered unhealthy despite being available and at latest generation
1958296 - OLM must explicitly alert on deprecated APIs in use
1958329 - pick 97428: add more context to log after a request times out
1958367 - Build metrics do not aggregate totals by build strategy
1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton
1958405 - etcd: current health checks and reporting are not adequate to ensure availability
1958406 - Twistlock flags mode of /var/run/crio/crio.sock
1958420 - openshift-install 4.7.10 fails with segmentation error
1958424 - aws: support more auth options in manual mode
1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View
1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse
1958643 - All pods creation stuck due to SR-IOV webhook timeout
1958679 - Compression on pool can't be disabled via UI
1958753 - VMI nic tab is not loadable
1958759 - Pulling Insights report is missing retry logic
1958811 - VM creation fails on API version mismatch
1958812 - Cluster upgrade halts as machine-config-daemon fails to parse rpm-ostree status during cluster upgrades
1958861 - [CCO] pod-identity-webhook certificate request failed
1958868 - ssh copy is missing when vm is running
1958884 - Confusing error message when volume AZ not found
1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs
1958958 - [SCALE] segfault with ovnkube adding to address set
1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes
1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod
1959058 - ovn-kubernetes has lock contention on the LSP cache
1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change
1959177 - Descheduler dev manifests are missing permissions
1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload
1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates
1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring
1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check
1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system
1959406 - Difficult to debug performance on ovn-k without pprof enabled
1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results
1959479 - machines doesn't support dual-stack loadbalancers on Azure
1959513 - Cluster-kube-apiserver does not use library-go for audit pkg
1959519 - Operand details page only renders one status donut no matter how many 'podStatuses' descriptors are used
1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console
1959564 - Test verify /run filesystem contents failing
1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot
1959650 - Gather SDI-related MachineConfigs
1959658 - showing a lot "constructing many client instances from the same exec auth config"
1959696 - Deprecate 'ConsoleConfigRoute' struct in console-operator config
1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO
1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode
1959711 - Egressnetworkpolicy doesn't work when configure the EgressIP
1959786 - [dualstack]EgressIP doesn't work on dualstack cluster for IPv6
1959916 - Console not works well against a proxy in front of openshift clusters
1959920 - UEFISecureBoot set not on the right master node
1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []
1960035 - iptables is missing from ose-keepalived-ipfailover image
1960059 - Remove "Grafana UI" link from Console Monitoring > Dashboards page
1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions
1960129 - [e2e][automation] add smoke tests about VM pages and actions
1960134 - some origin images are not public
1960171 - Enable SNO checks for image-registry
1960176 - CCO should recreate a user for the component when it was removed from the cloud providers
1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled
1960255 - fixed obfuscation permissions
1960257 - breaking changes in pr template
1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on shutdown, policy Cluster has significant performance cost
1960323 - Address issues raised by coverity security scan
1960324 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop
1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960339 - manifests: unset "preemptionPolicy" makes CVO hotloop
1960531 - Items under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' keep added for every access
1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana
1960546 - Add virt_platform metric to the collected metrics
1960554 - Remove rbacv1beta1 handling code
1960612 - Node disk info in overview/details does not account for second drive where /var is located
1960619 - Image registry integration tests use old-style OAuth tokens
1960683 - GlobalConfigPage is constantly requesting resources
1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces
1960716 - Missing details for debugging
1960732 - Outdated manifests directory in CSI driver operator repositories
1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master
1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be "the newest"
1960767 - /metrics endpoint of the Grafana UI is accessible without authentication
1960780 - CI: failed to create PDB "service-test" the server could not find the requested resource
1961064 - Documentation link to network policies is outdated
1961067 - Improve log gathering logic
1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs
1961091 - Gather MachineHealthCheck definitions
1961120 - CSI driver operators fail when upgrading a cluster
1961173 - recreate existing static pod manifests instead of updating
1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing
1961314 - Race condition in operator-registry pull retry unit tests
1961320 - CatalogSource does not emit any metrics to indicate if it's ready or not
1961336 - Devfile sample for BuildConfig is not defined
1961356 - Update single quotes to double quotes in string
1961363 - Minor string update for " No Storage classes found in cluster, adding source is disabled."
1961393 - DetailsPage does not work with group~version~kind 1961452 - Remove "Alertmanager UI" link from Console Monitoring > Alerting page 1961466 - Some dropdown placeholder text on route creation page is not translated 1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true 1961506 - NodePorts do not work on RHEL 7.9 workers (was "4.7 -> 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers") 1961536 - clusterdeployment without pull secret is crashing assisted service pod 1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop 1961545 - Fixing Documentation Generation 1961550 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names" 1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig 1961561 - The encryption controllers send lots of request to an API server 1961582 - Build failure on s390x 1961644 - NodeAuthenticator tests are failing in IPv6 1961656 - driver-toolkit missing some release metadata 1961675 - Kebab menu of taskrun contains Edit options which should not be present 1961701 - Enhance gathering of events 1961717 - Update runtime dependencies to Wallaby builds for bugfixes 1961829 - Quick starts prereqs not shown when description is long 1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy 1961878 - Add Sprint 199 translations 1961897 - Remove history listener before console UI is unmounted 1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes 1962062 - Monitoring dashboards should support default values of "All" 1962074 - SNO:the pod get stuck in CreateContainerError and prompt "failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable" after adding a 
performanceprofile 1962095 - Replace gather-job image without FQDN 1962153 - VolumeSnapshot routes are ambiguous, too generic 1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime 1962219 - NTO relies on unreliable leader-for-life implementation. 1962256 - use RHEL8 as the vm-example 1962261 - Monitoring components requesting more memory than they use 1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster 1962347 - Cluster does not exist logs after successful installation 1962392 - After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits 1962415 - duplicate zone information for in-tree PV after enabling migration 1962429 - Cannot create windows vm because kubemacpool.io denied the request 1962525 - [Migration] SDN migration stuck on MCO on RHV cluster 1962569 - NetworkPolicy details page should also show Egress rules 1962592 - Worker nodes restarting during OS installation 1962602 - Cloud credential operator scrolls info "unable to provide upcoming..." on unsupported platform 1962630 - NTO: Ship the current upstream TuneD 1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root 1962698 - Console-operator can not create resource console-public configmap in the openshift-config-managed namespace 1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint 1962740 - Add documentation to Egress Router 1962850 - [4.8] Bootimage bump tracker 1962882 - Version pod does not set priorityClassName 1962905 - Ramdisk ISO source defaulting to "http" breaks deployment on a good amount of BMCs 1963068 - ironic container should not specify the entrypoint 1963079 - KCM/KS: ability to enforce localhost communication with the API server. 
1963154 - Current BMAC reconcile flow skips Ironic's deprovision step 1963159 - Add Sprint 200 translations 1963204 - Update to 8.4 IPA images 1963205 - Installer is using old redirector 1963208 - Translation typos/inconsistencies for Sprint 200 files 1963209 - Some strings in public.json have errors 1963211 - Fix grammar issue in kubevirt-plugin.json string 1963213 - Memsource download script running into API error 1963219 - ImageStreamTags not internationalized 1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment 1963267 - Warning: Invalid DOM property classname. Did you mean className? console warnings in volumes table 1963502 - create template from is not descriptive 1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too 1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault 1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies 1963871 - cluster-etcd-operator:[build] upgrade to go 1.16 1963896 - The VM disks table does not show easy links to PVCs 1963912 - "[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}" failures on vsphere 1963932 - Installation failures in bootstrap in OpenStack release jobs 1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail 1964059 - rebase openshift/sdn to kube 1.21.1 1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration 1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to "Unknown provider baremetal" 1964243 - The oc compliance fetch-raw doesn’t work for disconnected cluster 1964270 - Failed to install 'cluster-kube-descheduler-operator' with error: "clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\": must be no more than 63 characters" 1964319 - Network policy 
"deny all" interpreted as "allow all" in description page 1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured 1964472 - Make project and namespace requirements more visible rather than giving me an error after submission 1964486 - Bulk adding of CIDR IPS to whitelist is not working 1964492 - Pick 102171: Implement support for watch initialization in P&F 1964625 - NETID duplicate check is only required in NetworkPolicy Mode 1964748 - Sync upstream 1.7.2 downstream 1964756 - PVC status is always in 'Bound' status when it is actually cloning 1964847 - Sanity check test suite missing from the repo 1964888 - opoenshift-apiserver imagestreamimports depend on >34s timeout support, WAS: transport: loopyWriter.run returning. connection error: desc = "transport is closing" 1964936 - error log for "oc adm catalog mirror" is not correct 1964979 - Add mapping from ACI to infraenv to handle creation order issues 1964997 - Helm Library charts are showing and can be installed from Catalog 1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots 1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation 1965283 - 4.7->4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData: 1965330 - oc image extract fails due to security capabilities on files 1965334 - opm index add fails during image extraction 1965367 - Typo in in etcd-metric-serving-ca resource name 1965370 - "Route" is not translated in Korean or Chinese 1965391 - When storage class is already present wizard do not jumps to "Stoarge and nodes" 1965422 - runc is missing Provides oci-runtime in rpm spec 1965522 - [v2v] Multiple typos on VM Import screen 1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists 1965909 - Replace "Enable Taint Nodes" by "Mark nodes as dedicated" 1965921 - [oVirt] High performance VMs 
shouldn't be created with Existing policy 1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request 1966077 - hidden descriptor is visible in the Operator instance details page1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11 1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality 1966138 - (release-4.8) Update K8s & OpenShift API versions 1966156 - Issue with Internal Registry CA on the service pod 1966174 - No storage class is installed, OCS and CNV installations fail 1966268 - Workaround for Network Manager not supporting nmconnections priority 1966401 - Revamp Ceph Table in Install Wizard flow 1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert 1966416 - (release-4.8) Do not exceed the data size limit 1966459 - 'policy/v1beta1 PodDisruptionBudget' and 'batch/v1beta1 CronJob' appear in image-registry-operator log 1966487 - IP address in Pods list table are showing node IP other than pod IP 1966520 - Add button from ocs add capacity should not be enabled if there are no PV's 1966523 - (release-4.8) Gather MachineAutoScaler definitions 1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed 1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug 1966602 - don't require manually setting IPv6DualStack feature gate in 4.8 1966620 - The bundle.Dockerfile in the repo is obsolete 1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install 1966654 - Alertmanager PDB is not created, but Prometheus UWM is 1966672 - Add Sprint 201 translations 1966675 - Admin console string updates 1966677 - Change comma to semicolon 1966683 - Translation bugs from Sprint 201 files 1966684 - Verify "Creating snapshot for claim <1>{pvcName}</1>" displays correctly 1966697 - Garbage collector logs every interval - move to debug 
level 1966717 - include full timestamps in the logs 1966759 - Enable downstream plugin for Operator SDK 1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version 1966813 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff 1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1 1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkub[e" 1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings "ipv6.dhcp-duid=ll" missing from dual stack install 1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image 1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored 1967197 - 404 errors loading some i18n namespaces 1967207 - Getting started card: console customization resources link shows other resources 1967208 - Getting started card should use semver library for parsing the version instead of string manipulation 1967234 - Console is continuously polling for ConsoleLink acm-link 1967275 - Awkward wrapping in getting started dashboard card 1967276 - Help menu tooltip overlays dropdown 1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check 1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit 1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion 1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests 1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small 1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion 1967591 - The ManagementCPUsOverride admission plugin should 
not mutate containers with the limit 1967595 - Fixes the remaining lint issues 1967614 - prometheus-k8s pods can't be scheduled due to volume node affinity conflict 1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn't work if ovirt-config.yaml doesn't exist and user should fill the FQDN URL 1967625 - Add OpenShift Dockerfile for cloud-provider-aws 1967631 - [4.8.0] Cluster install failed due to timeout while "Waiting for control plane" 1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube" 1967639 - Console whitescreens if user preferences fail to load 1967662 - machine-api-operator should not use deprecated "platform" field in infrastructures.config.openshift.io 1967667 - Add Sprint 202 Round 1 translations 1967713 - Insights widget shows invalid link to the OCM 1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming 1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than "NoExecute" 1967803 - should update to 7.5.5 for grafana resources version label 1967832 - Add more tests for periodic.go 1967833 - Add tasks pool to tasks_processing 1967842 - Production logs are spammed on "OCS requirements validation status Insufficient hosts to deploy OCS. 
A minimum of 3 hosts is required to deploy OCS" 1967843 - Fix null reference to messagesToSearch in gather_logs.go 1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring 1967933 - Network-Tools debug scripts not working as expected 1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: "mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied" 1968019 - drain timeout and pool degrading period is too short 1968067 - [master] Agent validation not including reason for being insufficient 1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed 1968175 - [4.8.0] Agent validation not including reason for being insufficient 1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration 1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn't be required 1968435 - [4.8.0] Unclear message in case of missing clusterImageSet 1968436 - Listeners timeout updated to remain using default value 1968449 - [4.8.0] Wrong Install-config override documentation 1968451 - [4.8.0] Garbage collector not cleaning up directories of removed clusters 1968452 - [4.8.0] [doc] "Mirror Registry Configuration" doc section needs clarification of functionality and limitations 1968454 - [4.8.0] backend events generated with wrong namespace for agent 1968455 - [4.8.0] Assisted Service operator's controllers are starting before the base service is ready 1968515 - oc should set user-agent when talking with registry 1968531 - Sync upstream 1.8.0 downstream 1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn't clean up properly 1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted 1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox 1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil 1968701 - Bare metal IPI installation is failed due to worker inspection 
failure 1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed 1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning 1969284 - Console Query Browser: Can't reset zoom to fixed time range after dragging to zoom 1969315 - [4.8.0] BMAC doesn't check if ISO Url changed before queuing BMH for reconcile 1969352 - [4.8.0] Creating BareMetalHost without the "inspect.metal3.io" does not automatically add it 1969363 - [4.8.0] Infra env should show the time that ISO was generated. 1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it 1969386 - Filesystem's Utilization doesn't show in VM overview tab 1969397 - OVN bug causing subports to stay DOWN fails installations 1969470 - [4.8.0] Misleading error in case of install-config override bad input 1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step 1969525 - Replace golint with revive 1969535 - Topology edit icon does not link correctly when branch name contains slash 1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it 1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long 1969561 - Test "an end user can use OLM can subscribe to the operator" generates deprecation alert 1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire 1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io 1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1 1969626 - Portfoward stream cleanup can cause kubelet to panic 1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out 1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check 1969712 - [4.8.0] Assisted service reports a malformed iso when we 
fail to download the base iso 1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups 1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml 1969784 - WebTerminal widget should send resize events 1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails 1969891 - Fix rotated pipelinerun status icon issue in safari 1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse 1969903 - Provisioning a large number of hosts results in an unexpected delay in hosts becoming available 1969951 - Cluster local doesn't work for knative services created from dev console 1969969 - ironic-rhcos-downloader container uses and old base image 1970062 - ccoctl does not work with STS authentication 1970068 - ovnkube-master logs "Failed to find node ips for gateway" error 1970126 - [4.8.0] Disable "metrics-events" when deploying using the operator 1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change 1970262 - [4.8.0] Remove Agent CRD Status fields not needed 1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs 1970269 - [4.8.0] missing role in agent CRD 1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs 1970381 - Monitoring dashboards: Custom time range inputs should retain their values 1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed 1970401 - [4.8.0] AgentLabelSelector is required yet not supported 1970415 - SR-IOV Docs needs documentation for disabling port security on a network 1970470 - Add pipeline annotation to Secrets which are created for a private repo 1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod 1970624 - 4.7->4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io 1970828 - "500 Internal Error" for all openshift-monitoring routes 1970975 
- 4.7 -> 4.8 upgrades on AWS take longer than expected 1971068 - Removing invalid AWS instances from the CF templates 1971080 - 4.7->4.8 CI: KubePodNotReady due to MCD's 5m sleep between drain attempts 1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 ! 1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces 1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing "Validated" condition about VIP not matching machine network 1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn't work - clusteroperator/kube-apiserver is not upgradeable 1971589 - [4.8.0] Telemetry-client won't report metrics in case the cluster was installed using the assisted operator 1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service 1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery 1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409) 1971739 - Keep /boot RW when kdump is enabled 1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly 1972128 - ironic-static-ip-manager container still uses 4.7 base image 1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are 1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster 1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted 1972262 - [4.8.0] "baremetalhost.metal3.io/detached" uses boolean value where string is expected 1972426 - Adopt failure can trigger deprovisioning 1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage 1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration 1972530 - 
[4.8.0] no indication for missing debugInfo in AgentClusterInstall 1972565 - performance issues due to lost node, pods taking too long to relaunch 1972662 - DPDK KNI modules need some additional tools 1972676 - Requirements for authenticating kernel modules with X.509 1972687 - Using bound SA tokens causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings 1972690 - [4.8.0] infra-env condition message isn't informative in case of missing pull secret 1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration 1972768 - kube-apiserver setup fail while installing SNO due to port being used 1972864 - New `local-with-fallback` service annotation does not preserve source IP 1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8 1973117 - No storage class is installed, OCS and CNV installations fail 1973233 - remove kubevirt images and references 1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. 1973428 - Placeholder bug for OCP 4.8.0 image release 1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped 1973672 - fix ovn-kubernetes NetworkPolicy 4.7->4.8 upgrade issue 1973995 - [Feature:IPv6DualStack] tests are failing in dualstack 1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings 1974447 - Requirements for nvidia GPU driver container for driver toolkit 1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. 1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel 1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion 1974746 - [4.8.0] File system usage not being logged appropriately 1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 
1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster 1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string 1974850 - [4.8] coreos-installer failing Execshield 1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift 1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing 1975155 - Kubernetes service IP cannot be accessed for rhel worker 1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types 1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData 1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified 1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve 1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn 1975672 - [4.8.0] Production logs are spammed on "Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient" 1975789 - worker nodes rebooted when we simulate a case where the api-server is down 1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s] 1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn't work - ingresscontroller "default" is degraded 1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted 1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel] 1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts 1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO 1977233 - [4.8] Unable to authenticate against IDP after upgrade to 
4.8-rc.1 1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO 1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller 1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes 1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses 1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8 1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod 1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used 1980788 - NTO-shipped stalld can segfault 1981633 - enhance service-ca injection 1982250 - Performance Addon Operator fails to install after catalog source becomes ready 1982252 - olm Operator is in CrashLoopBackOff state with error "couldn't cleanup cross-namespace ownerreferences"

  1. References:

https://access.redhat.com/security/cve/CVE-2016-2183 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-15106 https://access.redhat.com/security/cve/CVE-2020-15112 https://access.redhat.com/security/cve/CVE-2020-15113 https://access.redhat.com/security/cve/CVE-2020-15114 https://access.redhat.com/security/cve/CVE-2020-15136 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-26541 https://access.redhat.com/security/cve/CVE-2020-28469 https://access.redhat.com/security/cve/CVE-2020-28500 https://access.redhat.com/security/cve/CVE-2020-28852 https://access.redhat.com/security/cve/CVE-2021-3114 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3636 https://access.redhat.com/security/cve/CVE-2021-20206 https://access.redhat.com/security/cve/CVE-2021-20271 https://access.redhat.com/security/cve/CVE-2021-20291 https://access.redhat.com/security/cve/CVE-2021-21419 https://access.redhat.com/security/cve/CVE-2021-21623 https://access.redhat.com/security/cve/CVE-2021-21639 https://access.redhat.com/security/cve/CVE-2021-21640 https://access.redhat.com/security/cve/CVE-2021-21648 https://access.redhat.com/security/cve/CVE-2021-22133 https://access.redhat.com/security/cve/CVE-2021-23337 https://access.redhat.com/security/cve/CVE-2021-23362 https://access.redhat.com/security/cve/CVE-2021-23368 https://access.redhat.com/security/cve/CVE-2021-23382 https://access.redhat.com/security/cve/CVE-2021-25735 https://access.redhat.com/security/cve/CVE-2021-25737 https://access.redhat.com/security/cve/CVE-2021-26539 
https://access.redhat.com/security/cve/CVE-2021-26540 https://access.redhat.com/security/cve/CVE-2021-27292 https://access.redhat.com/security/cve/CVE-2021-28092 https://access.redhat.com/security/cve/CVE-2021-29059 https://access.redhat.com/security/cve/CVE-2021-29622 https://access.redhat.com/security/cve/CVE-2021-32399 https://access.redhat.com/security/cve/CVE-2021-33034 https://access.redhat.com/security/cve/CVE-2021-33194 https://access.redhat.com/security/cve/CVE-2021-33909 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ d4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26 Z7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs ImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h gep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8 TWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX X//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb 1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA mtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/ 0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6 dpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx +bLvJlpGiMg= =NJcM -----END PGP SIGNATURE-----

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Description:

Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.

Bugs:

  • RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)

  • cluster became offline after apiserver health check (BZ# 1942589)

  • Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):

1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913444 - RFE Make the source code for the endpoint-metrics-operator public 1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull 1927520 - RHACM 2.3.0 images 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection 1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate 1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms 1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application 1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header 1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call 1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS 1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service 1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service 1942589 - cluster became offline after apiserver health check 1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() 1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character 1944827 - CVE-2021-28918 
nodejs-netmask: improper input validation of octal input data 1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service 1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing 1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js 1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service 1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option 1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command 1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets 1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs 1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method 1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions 1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id 1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
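Several of the bugs above track lodash CVEs (CVE-2021-23337, command injection via template; CVE-2020-28500, ReDoS in toNumber/trim/trimEnd), and the advisory as a whole concerns prototype pollution in functions such as `merge`, `defaultsDeep`, and `zipObjectDeep`. As a minimal sketch of that vulnerability class (this is not lodash's actual implementation; `naiveMerge` and the payload are hypothetical):

```javascript
// Naive recursive merge that walks attacker-controlled keys, including
// "__proto__". JSON.parse creates a real own property named "__proto__",
// so the recursion reaches Object.prototype and writes to it.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === "object") {
      if (!target[key]) target[key] = {};
      naiveMerge(target[key], value); // key === "__proto__" recurses into Object.prototype
    } else {
      target[key] = value;
    }
  }
  return target;
}

// Attacker-supplied JSON payload, as it might arrive in a request body.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
naiveMerge({}, payload);

// Every plain object in the process now inherits the injected property.
const victim = {};
console.log(victim.isAdmin); // true: Object.prototype was polluted
```

lodash 4.17.21 and later guard these key paths; applications can defend in depth by rejecting `__proto__`, `constructor`, and `prototype` keys when merging untrusted input, or by storing such data in `Map` or `Object.create(null)` containers that have no prototype to pollute.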

  1. VDSM manages and monitors the host's storage, memory and networks as well as virtual machine creation, other host administration tasks, statistics gathering, and log collection.

Bug Fix(es):

  • An update in libvirt has changed the way block threshold events are submitted. As a result, the VDSM was confused by the libvirt event, and tried to look up a drive, logging a warning about a missing drive. In this release, the VDSM has been adapted to handle the new libvirt behavior, and does not log warnings about missing drives. (BZ#1948177)

  • Previously, when a virtual machine was powered off on the source host of a live migration and the migration finished successfully at the same time, the two events interfered with each other, and sometimes prevented migration cleanup resulting in additional migrations from the host being blocked. In this release, additional migrations are not blocked. (BZ#1959436)

  • Previously, when failing to execute a snapshot and re-executing it later, the second try would fail due to using the previous execution data. In this release, this data will be used only when needed, in recovery mode. (BZ#1984209)

  • The engine deletes the volume and causes data corruption. 1998017 - Keep cinderlib dependencies optional for 4.4.8

Bug Fix(es):

  • Documentation is referencing deprecated API for Service Export - Submariner (BZ#1936528)

  • Importing of cluster fails due to error/typo in generated command (BZ#1936642)

  • RHACM 2.2.2 images (BZ#1938215)

  • 2.2 clusterlifecycle fails to allow provision fips: true clusters on aws, vsphere (BZ#1941778)

  • Summary:

The Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1466",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "primavera unifier",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.7"
      },
      {
        "model": "financial services crime and compliance management studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.8.3.0"
      },
      {
        "model": "jd edwards enterpriseone tools",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.2.6.1"
      },
      {
        "model": "health sciences data management workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.0.0.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.11"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.0"
      },
      {
        "model": "primavera unifier",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.59"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "communications cloud native core policy",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "1.11.0"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12"
      },
      {
        "model": "lodash",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "lodash",
        "version": "4.17.21"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.7"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.0"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12"
      },
      {
        "model": "cloud manager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "health sciences data management workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "2.5.2.1"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "communications cloud native core binding support function",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "1.9.0"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.2.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "communications design studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "7.4.2.0.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "system manager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": "9.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.12"
      },
      {
        "model": "active iq unified manager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "financial services crime and compliance management studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.8.2.0"
      },
      {
        "model": "retail customer management and segmentation foundation",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.0"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.4"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.3.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.11"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "communications services gatekeeper",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "7.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.58"
      },
      {
        "model": "lodash",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "lodash",
        "version": "4.17.21"
      },
      {
        "model": "lodash",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "lodash",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:lodash:lodash:*:*:*:*:*:node.js:*:*",
                "cpe_name": [],
                "versionEndExcluding": "4.17.21",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:18.8:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "17.12",
                "versionStartIncluding": "17.7",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:19.12:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:retail_customer_management_and_segmentation_foundation:19.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_services_gatekeeper:7.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:20.12:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "17.12.11",
                "versionStartIncluding": "17.12.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:9.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "20.12.7",
                "versionStartIncluding": "20.12.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "19.12.11",
                "versionStartIncluding": "19.12.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "18.8.12",
                "versionStartIncluding": "18.8.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_design_studio:7.4.2.0.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:1.11.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:1.9.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "9.2.6.1",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:financial_services_crime_and_compliance_management_studio:8.0.8.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_data_management_workbench:2.5.2.1:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_data_management_workbench:3.0.0.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:financial_services_crime_and_compliance_management_studio:8.0.8.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:linux:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:windows:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:netapp:cloud_manager:-:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:netapp:system_manager:9.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "1.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ],
    "trust": 1.3
  },
  "cve": "CVE-2021-23337",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "SINGLE",
            "author": "NVD",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.0,
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "MEDIUM",
            "trust": 1.0,
            "userInteractionRequired": false,
            "vectorString": "AV:N/AC:L/Au:S/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Low",
            "accessVector": "Network",
            "authentication": "Single",
            "author": "NVD",
            "availabilityImpact": "Partial",
            "baseScore": 6.5,
            "confidentialityImpact": "Partial",
            "exploitabilityScore": null,
            "id": "CVE-2021-23337",
            "impactScore": null,
            "integrityImpact": "Partial",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.9,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:L/Au:S/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "SINGLE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.0,
            "id": "VHN-381798",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:L/Au:S/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "NVD",
            "availabilityImpact": "HIGH",
            "baseScore": 7.2,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.2,
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 7.2,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2021-23337",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "High",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2021-23337",
            "trust": 1.8,
            "value": "HIGH"
          },
          {
            "author": "report@snyk.io",
            "id": "CVE-2021-23337",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202102-1137",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-381798",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-23337",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function. Lodash Contains a command injection vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. There is a security vulnerability in Lodash. Please keep an eye on CNNVD or vendor announcements. Description:\n\nThe ovirt-engine package provides the manager for virtualization\nenvironments. \nThis manager enables admins to define hosts and networks, as well as to add\nstorage, create VMs and manage user permissions. \n\nBug Fix(es):\n\n* This release adds the queue attribute to the virtio-scsi driver in the\nvirtual machine configuration. This improvement enables multi-queue\nperformance with the virtio-scsi driver. (BZ#911394)\n\n* With this release, source-load-balancing has been added as a new\nsub-option for xmit_hash_policy. It can be configured for bond modes\nbalance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying\nxmit_hash_policy=vlan+srcmac. (BZ#1683987)\n\n* The default DataCenter/Cluster will be set to compatibility level 4.6 on\nnew installations of Red Hat Virtualization 4.4.6.; (BZ#1950348)\n\n* With this release, support has been added for copying disks between\nregular Storage Domains and Managed Block Storage Domains. \nIt is now possible to migrate disks between Managed Block Storage Domains\nand regular Storage Domains. (BZ#1906074)\n\n* Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was\nset by default to false and was supposed to be uses in cluster\ncompatibility levels below 4.4. The value was set to general version. \nWith this release, each cluster level has it\u0027s own value, defaulting to\nfalse for 4.4 and above. This will reduce unnecessary overhead in removing\ntime outs of the file system freeze command. 
(BZ#1932284)\n\n* With this release, running virtual machines is supported for up to 16TB\nof RAM on x86_64 architectures. (BZ#1944723)\n\n* This release adds the gathering of oVirt/RHV related certificates to\nallow easier debugging of issues for faster customer help and issue\nresolution. \nInformation from certificates is now included as part of the sosreport. \nNote that no corresponding private key information is gathered, due to\nsecurity considerations. (BZ#1845877)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine\n1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors\n1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain\n1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine\n1717411 - improve engine logging when migration fail\n1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs\n1775145 - Incorrect message from hot-plugging memory\n1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC. \n1845877 - [RFE] Collect information about RHV PKI\n1875363 - engine-setup failing on FIPS enabled rhel8 machine\n1906074 - [RFE] Support disks copy between regular and managed block storage domains\n1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration\n1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning\n1919195 - Unable to create snapshot without saving memory of running VM from VM Portal. 
\n1919984 - engine-setup failse to deploy the grafana service in an external DWH server\n1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal\n1926018 - Failed to run VM after FIPS mode is enabled\n1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing \u0027rsyslog-gnutls\u0027 package. \n1928158 - Rename \u0027CA Certificate\u0027 link in welcome page to \u0027Engine CA certificate\u0027\n1928188 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1929211 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error \"missing groups or modules: virt:8.4\"\n1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful\n1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured\n1932284 - Engine handled FS freeze is not fast enough for Windows systems\n1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed\n1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2\n1943267 - Snapshot creation is failing for VM having vGPU. \n1944723 - [RFE] Support virtual machines with 16TB memory\n1948577 - [welcome page] remove \"Infrastructure Migration\" section (obsoleted)\n1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule\n1949547 - rhv-log-collector-analyzer report contains \u0027b characters\n1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6\n1950466 - Host installation failed\n1954401 - HP VMs pinning is wiped after edit-\u003eok and pinned to first physical CPUs.  
Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.8.2 bug fix and security update\nAdvisory ID:       RHSA-2021:2438-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:2438\nIssue date:        2021-07-27\nCVE Names:         CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 \n                   CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 \n                   CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 \n                   CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 \n                   CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 \n                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n                   CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 \n                   CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 \n                   CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 \n                   CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 \n                   CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 \n                   CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 \n                   CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 \n                   CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 \n                   CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 \n                   CVE-2021-33909 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.8.2 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.8. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.8.2. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2021:2437\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nSecurity Fix(es):\n\n* SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n(CVE-2016-2183)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)\n\n* etcd: DoS in wal/wal.go (CVE-2020-15112)\n\n* etcd: directories created via os.MkdirAll are not checked for permissions\n(CVE-2020-15113)\n\n* etcd: gateway can include itself as an endpoint resulting in resource\nexhaustion and leads to DoS (CVE-2020-15114)\n\n* etcd: no authentication is performed against endpoints provided in the\n- --endpoints flag (CVE-2020-15136)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* 
nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* golang: crypto/elliptic: incorrect operations on the P-224 curve\n(CVE-2021-3114)\n\n* containernetworking-cni: Arbitrary path injection via type field in CNI\nconfiguration (CVE-2021-20206)\n\n* containers/storage: DoS via malicious image (CVE-2021-20291)\n\n* prometheus: open redirect under the /new endpoint (CVE-2021-29622)\n\n* golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)\n\n* go.elastic.co/apm: leaks sensitive HTTP headers during panic\n(CVE-2021-22133)\n\nSpace precludes listing in detail the following additional CVEs fixes:\n(CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382),\n(CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and\n(CVE-2021-23368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nAdditional Changes:\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-x86_64\n\nThe image digest is\nsha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-s390x\n\nThe image digest is\nsha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le\n\nThe image digest is\nsha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f\n\nAll OpenShift Container Platform 4.8 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n1725981 - oc explain does not work well with full resource.group names\n1747270 - [osp] Machine with name \"\u003ccluster-id\u003e-worker\"couldn\u0027t join the cluster\n1772993 - rbd block devices attached to a host are visible in unprivileged container pods\n1786273 - [4.6] KAS pod logs show \"error building openapi models ... has invalid property: anyOf\" for CRDs\n1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts\n1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header\n1812212 - ArgoCD example application cannot be downloaded from github\n1817954 - [ovirt] Workers nodes are not numbered sequentially\n1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole\n1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with \"Unable to connect to the server\"\n1825417 - The containerruntimecontroller doesn\u0027t roll back to CR-1 if we delete CR-2\n1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades\n1835264 - Intree provisioner doesn\u0027t respect PVC.spec.dataSource sometimes\n1839101 - Some sidebar links in developer perspective don\u0027t follow same project\n1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes\n1846875 - Network setup test high failure rate\n1848151 - Console continues to poll the ClusterVersion resource when the user doesn\u0027t have authority\n1850060 - After upgrading to 3.11.219 timeouts are appearing. 
1852637 - Kubelet sets incorrect image names in node status images section
1852743 - Node list CPU column only show usage
1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values
1857008 - [Edge] [BareMetal] Not provided STATE value for machines
1857477 - Bad helptext for storagecluster creation
1859382 - check-endpoints panics on graceful shutdown
1862084 - Inconsistency of time formats in the OpenShift web-console
1864116 - Cloud credential operator scrolls warnings about unsupported platform
1866222 - Should output all options when runing `operator-sdk init --help`
1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard
1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert
1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions
1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host
1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions
1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go
1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS
1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag
1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method
1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics
1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly
1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore
1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack
1873649 - proxy.config.openshift.io should validate user inputs
1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials
1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable
1876918 - scheduler test leaves taint behind
1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1
1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable
1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API
1879077 - Nodes tainted after configuring additional host iface
1879140 - console auth errors not understandable by customers
1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens
1879184 - CVO must detect or log resource hotloops
1879495 - [4.6] namespace "openshift-user-workload-monitoring" does not exist"
1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string
1879944 - [OCP 4.8] Slow PV creation with vsphere
1880757 - AWS: master not removed from LB/target group when machine deleted
1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)
1881210 - nodePort for router-default metrics with NodePortService does not exist
1881481 - CVO hotloops on some service manifests
1881484 - CVO hotloops on deployment manifests
1881514 - CVO hotloops on imagestreams from cluster-samples-operator
1881520 - CVO hotloops on (some) clusterrolebindings
1881522 - CVO hotloops on clusterserviceversions packageserver
1881662 - Error getting volume limit for plugin kubernetes.io/<name> in kubelet logs
1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io
1881938 - migrator deployment doesn't tolerate masters
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883587 - No option for user to select volumeMode
1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine
1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster
1884800 - Failed to set up mount unit: Invalid argument
1885186 - Removing ssh keys MC does not remove the key from authorized_keys
1885349 - [IPI Baremetal] Proxy Information Not passed to metal3
1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses
1886572 - auth: error contacting auth provider when extra ingress (not default) goes down
1887849 - When creating new storage class failure_domain is missing.
1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs
1889689 - AggregatedAPIErrors alert may never fire
1890678 - Cypress: Fix 'structure' accesibility violations
1890828 - Intermittent prune job failures causing operator degradation
1891124 - CP Conformance: CRD spec and status failures
1891301 - Deleting bmh by "oc delete bmh' get stuck
1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass
1891766 - [LSO] Min-Max filter's from OCS wizard accepts Negative values and that cause PV not getting created
1892642 - oauth-server password metrics do not appear in UI after initial OCP installation
1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version
1893850 - Add an alert for requests rejected by the apiserver
1893999 - can't login ocp cluster with oc 4.7 client without the username
1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion
1895053 - Allow builds to optionally mount in cluster trust stores
1896226 - recycler-pod template should not be in kubelet static manifests directory
1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types
1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name
1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI install
1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout
1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1899057 - fix spurious br-ex MAC address error log
1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay
1899587 - [External] RGW usage metrics shown on Object Service Dashboard is incorrect
1900454 - Enable host-based disk encryption on Azure platform
1900819 - Scaled ingress replicas following sharded pattern don't balance evenly across multi-AZ
1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed
1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API
1901648 - "do you need to set up custom dns" tooltip inaccurate
1902003 - Jobs Completions column is not sorting when there are "0 of 1" and "1 of 1" in the list.
1902076 - image registry operator should monitor status of its routes
1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs
1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given
1903228 - Pod stuck in Terminating, runc init process frozen
1903383 - Latest RHCOS 47.83. builds failing to install: mount /root.squashfs failed
1903553 - systemd container renders node NotReady after deleting it
1903700 - metal3 Deployment doesn't have unique Pod selector
1904006 - The --dir option doest not work for command `oc image extract`
1904505 - Excessive Memory Use in Builds
1904507 - vsphere-problem-detector: implement missing metrics
1904558 - Random init-p error when trying to start pod
1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests
1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list
1905159 - Installation on previous unused dasd fails after formatting
1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory
1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails
1905577 - Control plane machines not adopted when provisioning network is disabled
1905627 - Warn users when using an unsupported browser such as IE
1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP
1905849 - Default volumesnapshotclass should be created when creating default storageclass
1906056 - Bundles skipped via the `skips` field cannot be pinned
1906102 - CBO produces standard metrics
1906147 - ironic-rhcos-downloader should not use --insecure
1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart
1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region
1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage
1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value
1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything
1907614 - Update kubernetes deps to 1.20
1908068 - Enable DownwardAPIHugePages feature gate
1908169 - The example of Import URL is "Fedora cloud image list" for all templates.
1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container
1908343 - Input labels in Manage columns modal should be clickable
1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures
1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule
1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes
1908765 - [SCALE] enable OVN lflow data path groups
1908774 - [SCALE] enable OVN DB memory trimming on compaction
1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it
1909091 - Pod/node/ip/template isn't showing when vm is running
1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing
1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade
1910067 - UPI: openstacksdk fails on "server group list"
1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing
1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status
1910378 - socket timeouts for webservice communication between pods
1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling
1910500 - Could not list CSI provisioner on web when create storage class on GCP platform
1911211 - Should show the cert-recovery-controller version correctly
1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames
1912571 - libvirt: Support setting dnsmasq options through the install config
1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1913112 - BMC details should be optional for unmanaged hosts
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913341 - GCP: strange cluster behavior in CI run
1913399 - switch to v1beta1 for the priority and fairness APIs
1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint
1913532 - After a 4.6 to 4.7 upgrade, a node went unready
1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory"
1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs
1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root
1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20
1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names
1915693 - Not able to install gpu-operator on cpumanager enabled node.
1915971 - Role and Role Binding breadcrumbs do not work as expected
1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page
1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall
1916392 - scrape priority and fairness endpoints for must-gather
1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form
1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready"
1916553 - Default template's description is empty on details tab
1916593 - Destroy cluster sometimes stuck in a loop
1916872 - need ability to reconcile exgw annotations on pod add
1916890 - [OCP 4.7] api or api-int not available during installation
1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs.
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - `oc adm catalog mirror` doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use `oc new-app --name=testapp2 -i ` with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in "Insights" popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set - was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the `resources` section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount
of CPU and memory and node becoming unhealthy\n1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start  of the Keepalived container\n1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails\n1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)\n1931629 - Conversational Hub Fails due to ImagePullBackOff\n1931637 - Kubeturbo Operator fails due to ImagePullBackOff\n1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race. \n1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint\n1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods\n1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently\n1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff\n1931949 - Red Hat  Integration Camel-K Operator keeps stuck in Pending state\n1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6\n1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7\n1932001 - Only one of multiple subscriptions to the same package is honored\n1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown\n1932105 - machine-config ClusterOperator claims level while control-plane still updating\n1932133 - AWS EBS CSI Driver doesn\u2019t support \u201ccsi.storage.k8s.io/fsTyps\u201d parameter\n1932135 - When \u201ciopsPerGB\u201d parameter is not set, event for AWS EBS CSI Driver provisioning is not clear\n1932152 - When \u201ciopsPerGB\u201d parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear\n1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors\n1932182 - catalog operator causing CPU spikes and bad etcd performance\n1932229 - Can\u2019t find kubelet metrics for aws ebs csi volumes\n1932281 - [Assisted-4.7][UI] 
Unable to change upgrade channel once upgrades were discovered\n1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the \"allowedIframeHostnames\" option can lead to bypass hostname whitelist for iframe element\n1932324 - CRIO fails to create a Pod in sandbox stage -  starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \\\"\\n\"\n1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation\n1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new \"canary\" route\n1932453 - Update Japanese timestamp format\n1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue\n1932487 - [OKD] origin-branding manifest is missing cluster profile annotations\n1932502 - Setting MTU for a bond interface using Kernel arguments is not working\n1932618 - Alerts during a test run should fail the test job, but were not\n1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be\n1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy\n1932673 - Virtual machine template provided by red hat should not be editable. 
The UI allows to edit and then reverse the change after it was made\n1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network\n1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM\n1932805 - e2e: test OAuth API connections in the tests by that name\n1932816 - No new local storage operator bundle image is built\n1932834 - enforce the use of hashed access/authorize tokens\n1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console\n1933102 - Canary daemonset uses default node selector\n1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]\n1933159 - multus DaemonSets should use maxUnavailable: 33%\n1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%\n1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%\n1933179 - network-check-target DaemonSet should use maxUnavailable: 10%\n1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%\n1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%\n1933263 - user manifest with nodeport services causes bootstrap to block\n1933269 - Cluster unstable replacing an unhealthy etcd member\n1933284 - Samples in CRD creation are ordered arbitarly\n1933414 - Machines are created with unexpected name for Ports\n1933599 - bump k8s.io/apiserver to 1.20.3\n1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like \":\"\n1933664 - Getting Forbidden for image in a container template when creating a sample app\n1933708 - Grafana is not displaying deployment config resources in dashboard `Default /Kubernetes / Compute Resources / Namespace (Workloads)`\n1933711 - EgressDNS: Keep short lived records at most 30s\n1933730 - 
[AI-UI-Wizard] Toggling \"Use extra disks for local storage\" checkbox highlights the \"Next\" button to move forward but grays out once clicked\n1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively\n1933772 - MCD Crash Loop Backoff\n1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior\n1933857 - Details page can throw an uncaught exception if kindObj prop is undefined\n1933880 - Kuryr-Controller crashes when it\u0027s missing the status object\n1934021 - High RAM usage on machine api termination node system oom\n1934071 - etcd consuming high amount of  memory and CPU after upgrade to 4.6.17\n1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade\n1934085 - Scheduling conformance tests failing in a single node cluster\n1934107 - cluster-authentication-operator builds URL incorrectly for IPv6\n1934112 - Add memory and uptime metadata to IO archive\n1934113 - mcd panic when there\u0027s not enough free disk space\n1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP\n1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh\n1934174 - rootfs too small when enabling NBDE\n1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3\n1934177 - knative-camel-operator  CreateContainerError \"container_linux.go:366: starting container process caused: chdir to cwd (\\\"/home/nonroot\\\") set in config.json failed: permission denied\"\n1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0\n1934229 - List page text filter has input lag\n1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions\n1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady\n1934516 - Setup different priority classes for prometheus-k8s and 
prometheus-user-workload pods\n1934556 - OCP-Metal images\n1934557 - RHCOS boot image bump for LUKS fixes\n1934643 - Need BFD failover capability on ECMP routes\n1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%\n1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)\n1934905 - CoreDNS\u0027s \"errors\" plugin is not enabled for custom upstream resolvers\n1935058 - Can\u2019t finish install sts clusters on aws government region\n1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login\n1935155 - IGMP/MLD packets being dropped\n1935157 - [e2e][automation] environment tests broken\n1935165 - OCP 4.6 Build fails when filename contains an umlaut\n1935176 - Missing an indication whether the deployed setup is SNO. \n1935269 - Topology operator group shows child Jobs. Not shown in details view\u0027s resources. \n1935419 - Failed to scale worker using virtualmedia on Dell R640\n1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting\n1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7\n1935541 - console operator panics in DefaultDeployment with nil cm\n1935582 - prometheus liveness probes cause issues while replaying WAL\n1935604 - high CPU usage fails ingress controller\n1935667 - pipelinerun status icon rendering issue\n1935706 - test: Detect when the master pool is still updating after upgrade\n1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]\n1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text\n1935909 - New CSV using ServiceAccount named \"default\" stuck in Pending during upgrade\n1936022 - DNS operator performs spurious updates in response to API\u0027s defaulting of daemonset\u0027s terminationGracePeriod and service\u0027s 
clusterIPs\n1936030 - Ingress operator performs spurious updates in response to API\u0027s defaulting of NodePort service\u0027s clusterIPs field\n1936223 - The IPI installer has a typo. It is missing the word \"the\" in \"the Engine\". \n1936336 - Updating multus-cni builder \u0026 base images to be consistent with ART 4.8 (closed)\n1936342 - kuryr-controller restarting after 3 days cluster running - pools without members\n1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623\n1936488 - [sig-instrumentation][Late] Alerts shouldn\u0027t report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error\n1936515 - sdn-controller is missing some health checks\n1936534 - When creating a worker with a used mac-address stuck on registering\n1936585 - configure alerts if the catalogsources are missing\n1936620 - OLM checkbox descriptor renders switch instead of checkbox\n1936721 - network-metrics-deamon not associated with a priorityClassName\n1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear\n1936785 - Configmap gatherer doesn\u0027t include namespace name (in the archive path) in case of a configmap with binary data\n1936788 - RBD RWX PVC creation with  Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection\n1936798 - Authentication log gatherer shouldn\u0027t scan all the pod logs in the openshift-authentication namespace\n1936801 - Support ServiceBinding 0.5.0+\n1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow\n1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies\n1936859 - ovirt 4.4 -\u003e 4.5 upgrade jobs are permafailing\n1936867 - Periodic vsphere IPI install is broken - missing pip\n1936871 - [Cinder CSI] Topology aware provisioning doesn\u0027t work when Nova and Cinder AZs are different\n1936904 
- Wrong output YAML when syncing groups without --confirm\n1936983 - Topology view - vm details screen isntt stop loading\n1937005 - when kuryr quotas are unlimited, we should not sent alerts\n1937018 - FilterToolbar component does not handle \u0027null\u0027 value for \u0027rowFilters\u0027 prop\n1937020 - Release new from image stream chooses incorrect ID based on status\n1937077 - Blank White page on Topology\n1937102 - Pod Containers Page Not Translated\n1937122 - CAPBM changes to support flexible reboot modes\n1937145 - [Local storage] PV provisioned by localvolumeset stays in \"Released\" status after the pod/pvc deleted\n1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes\n1937244 - [Local Storage] The model name of aws EBS doesn\u0027t be extracted well\n1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes\n1937452 - cluster-network-operator CI linting fails in master branch\n1937459 - Wrong Subnet retrieved for Service without Selector\n1937460 - [CI] Network quota pre-flight checks are failing the installation\n1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster\n1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation\n1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint\n1937535 - Not all image pulls within OpenShift builds retry\n1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes\n1937627 - Bump DEFAULT_DOC_URL for 4.8\n1937628 - Bump upgrade channels for 4.8\n1937658 - Description for storage class encryption during storagecluster creation needs to be updated\n1937666 - Mouseover on headline\n1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage\n1937693 - ironic image \"/\" cluttered with files\n1937694 - [oVirt] split 
ovirt providerIDReconciler logic into NodeController and ProviderIDController\n1937717 - If browser default font size is 20, the layout of template screen breaks\n1937722 - OCP 4.8 vuln due to BZ 1936445\n1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator\n1937941 - [RFE]fix wording for favorite templates\n1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations\n1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go\n1938321 - Cannot view PackageManifest objects in YAML on \u0027Home \u003e Search\u0027 page nor \u0027CatalogSource details \u003e Operators tab\u0027\n1938465 - thanos-querier should set a CPU request on the thanos-query container\n1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container\n1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them\n1938468 - kube-scheduler-operator has a container without a CPU request\n1938492 - Marketplace extract container does not request CPU or memory\n1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not\n1938636 - Can\u0027t set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller\n1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph\n1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%\n1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances\n1938949 - [VPA] Updater failed to trigger evictions due to \"vpa-admission-controller\" not found\n1939054 - machine healthcheck kills aws spot instance before generated\n1939060 - CNO: nodes and masters are upgrading simultaneously\n1939069 - Add source to vm template silently failed when no storage class is defined in the cluster\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1939168 - 
Builds failing for OCP 3.11 since PR#25 was merged\n1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz\n1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez\n1939232 - CI tests using openshift/hello-world broken by Ruby Version Update\n1939270 - fix co upgradeableFalse status and reason\n1939294 - OLM may not delete pods with grace period zero (force delete)\n1939412 - missed labels for thanos-ruler pods\n1939485 - CVE-2021-20291 containers/storage: DoS via malicious image\n1939547 - Include container=\"POD\" in resource queries\n1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0\n1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated\n1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs\n1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent\n1939661 - support new AWS region ap-northeast-3\n1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution\n1939731 - Image registry operator reports unavailable during normal serial run\n1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters\n1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase\n1939752 - ovnkube-master sbdb container does not set requests on cpu or memory\n1939753 - Delete HCO is stucking if there is still VM in the cluster\n1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page\n1939853 - [DOC] Creating manifests API should not allow folder in the \"file_name\"\n1939865 - GCP PD CSI driver does not have CSIDriver instance\n1939869 - [e2e][automation] Add annotations to datavolume for HPP\n1939873 - Unlimited number of characters accepted for base domain name\n1939943 - 
`cluster-kube-apiserver-operator check-endpoints` observed a panic: runtime error: invalid memory address or nil pointer dereference\n1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration\n1940057 - Openshift builds should use a wach instead of polling when checking for pod status\n1940142 - 4.6-\u003e4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying\n1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network\n1940206 - Selector and VolumeTableRows not i18ned\n1940207 - 4.7-\u003e4.6 rollbacks stuck on prometheusrules admission webhook \"no route to host\"\n1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)\n1940318 - No data under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027\n1940322 - Split of dashbard  is wrong, many Network parts\n1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn\u0027t have flavors needed for compute machines\n1940361 - [e2e][automation] Fix vm action tests with storageclass HPP\n1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters\n1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys\n1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages\n1940499 - hybrid-overlay not logging properly before exiting due to an error\n1940518 - Components in bare metal components lack resource requests\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned\n1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info\n1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list\n1940876 - 
Components in ovirt components lack resource requests\n1940889 - Installation failures in OpenStack release jobs\n1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io\n1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP\n1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster\n1940950 - vsphere: client/bootstrap CSR double create\n1940972 - vsphere: [4.6] CSR approval delayed for unknown reason\n1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16. \n1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy\n1941342 - Add `kata-osbuilder-generate.service` as part of the default presets\n1941456 - Multiple pods stuck in ContainerCreating status with the message \"failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user\" being seen in the journal log\n1941526 - controller-manager-operator: Observed a panic: nil pointer dereference\n1941592 - HAProxyDown not Firing\n1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp\n1941625 - Developer -\u003e Topology - i18n misses\n1941635 - Developer -\u003e Monitoring - i18n misses\n1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid\n1941645 - Developer -\u003e Builds - i18n misses\n1941655 - Developer -\u003e Pipelines - i18n misses\n1941667 - Developer -\u003e Project - i18n misses\n1941669 - Developer -\u003e ConfigMaps - i18n misses\n1941759 - Errored pre-flight checks should not prevent install\n1941798 - Some details pages don\u0027t have internationalized ResourceKind labels\n1941801 - Many filter toolbar dropdowns haven\u0027t been internationalized\n1941815 - From the web console the terminal can no longer connect after using leaving and 
returning to the terminal view\n1941859 - [assisted operator] assisted pod deploy first time in error state\n1941901 - Toleration merge logic does not account for multiple entries with the same key\n1941915 - No validation against template name in boot source customization\n1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description\n1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8\n1941990 - Pipeline metrics endpoint changed in osp-1.4\n1941995 - fix backwards incompatible trigger api changes in osp1.4\n1942086 - Administrator -\u003e Home - i18n misses\n1942117 - Administrator -\u003e Workloads - i18n misses\n1942125 - Administrator -\u003e Serverless - i18n misses\n1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)\n1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail\n1942271 - Insights operator doesn\u0027t gather pod information from openshift-cluster-version\n1942375 - CRI-O failing with error \"reserving ctr name\"\n1942395 - The status is always \"Updating\" on dc detail page after deployment has failed. 
\n1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied\n1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate\n1942536 - Corrupted image preventing containers from starting\n1942548 - Administrator -\u003e Networking - i18n misses\n1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic\n1942555 - Network policies in ovn-kubernetes don\u0027t support external traffic from router when the endpoint publishing strategy is HostNetwork\n1942557 - Query is reporting \"no datapoint\" when label cluster=\"\" is set but work when the label is removed or when running directly in Prometheus\n1942608 - crictl cannot list the images with an error: error locating item named \"manifest\" for image with ID\n1942614 - Administrator -\u003e Storage - i18n misses\n1942641 - Administrator -\u003e Builds - i18n misses\n1942673 - Administrator -\u003e Pipelines - i18n misses\n1942694 - Resource names with a colon do not display property in the browser window title\n1942715 - Administrator -\u003e User Management - i18n misses\n1942716 - Quay Container Security operator has Medium \u003c-\u003e Low colors reversed\n1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]\n1942736 - Administrator -\u003e Administration - i18n misses\n1942749 - Install Operator form should use info icon for popovers\n1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls\n1942839 - Windows VMs fail to start on air-gapped environments\n1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set\n1942858 - [RFE]Confusing detach volume UX\n1942883 - AWS EBS CSI driver does not support partitions\n1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy\n1942935 - must-gather improvements\n1943145 - vsphere: 
client/bootstrap CSR double create\n1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked\n1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest\n1943238 - The conditions table does not occupy 100% of the width. \n1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane\n1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB. \n1943315 - avoid workload disruption for ICSP changes\n1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes\n1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest\n1943356 - Dynamic plugins surfaced in the UI should be referred to as \"Console plugins\"\n1943539 - crio-wipe is failing to start \"Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container\"\n1943543 - DeploymentConfig Rollback doesn\u0027t reset params correctly\n1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement\n1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds\n1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage\n1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn\n1943649 - don\u0027t use hello-openshift for network-check-target\n1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress\n1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions\n1943804 - API server on AWS takes disruption between 70s and 110s after pod 
begins termination via external LB\n1943845 - Router pods should have startup probes configured\n1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors\n1944160 - CNO: nbctl daemon should log reconnection info\n1944180 - OVN-Kube Master does not release election lock on shutdown\n1944246 - Ironic fails to inspect and move node to \"manageable\u0027 but get bmh remains in \"inspecting\"\n1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region\n1944509 - Translatable texts without context in ssh expose component\n1944581 - oc project not works with cluster proxy\n1944587 - VPA could not take actions based on the recommendation when min-replicas=1\n1944590 - The field name \"VolumeSnapshotContent\" is wrong on VolumeSnapshotContent detail page\n1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1944631 - openshif authenticator should not accept non-hashed tokens\n1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with \".. 
still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock"
1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures
1944674 - Project field become to "All projects" and disabled in "Review and create virtual machine" step in devconsole
1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods
1944761 - field level help instances do not use common util component <FieldLevelHelp>
1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present
1944763 - field level help instances do not use common util component <FieldLevelHelp>
1944853 - Update to nodejs >=14.15.4 for ARM
1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts
1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation
1945027 - Button 'Copy SSH Command' does not work
1945085 - Bring back API data in etcd test
1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled
1945103 - 'User credentials' shows even the VM is not running
1945104 - In k8s 1.21 bump '[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume' tests are disabled
1945146 - Remove pipeline Tech preview badge for pipelines GA operator
1945236 - Bootstrap ignition shim doesn't follow proxy settings
1945261 - Operator dependency not consistently chosen from default channel
1945312 - project deletion does not reset UI project context
1945326 - console-operator: does not check route health periodically
1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules
1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly
1945443 - operator-lifecycle-manager-packageserver flaps Available=False with no reason or message
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1945548 - catalog resource update failed if spec.secrets set to ""
1945584 - Elasticsearch operator fails to install on 4.8 cluster on ppc64le/s390x
1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION
1945630 - Pod log filename no longer in <pod-name>-<container-name>.log format
1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin
1945646 - gcp-routes.sh running as initrc_t unnecessarily
1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret
1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry
1945687 - Dockerfile needs updating to new container CI registry
1945700 - Syncing boot mode after changing device should be restricted to Supermicro
1945816 - "Ingresses" should be kept in English for Chinese
1945818 - Chinese translation issues: Operator should be the same with English `Operators`
1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out
1945910 - [aws] support byo iam roles for instances
1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6.
1946079 - Virtual master is not getting an IP address
1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile"
1946119 - panic parsing install-config
1946243 - No relevant error when pg limit is reached in block pools page
1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image
1946320 - Incorrect error message in Deployment Attach Storage Page
1946449 - [e2e][automation] Fix cloud-init tests as UI changed
1946458 - Edit Application action overwrites Deployment envFrom values on save
1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI.
1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default
1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: <nil>"
1946506 - [on-prem] mDNS plugin no longer needed
1946513 - honor use specified system reserved with auto node sizing
1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready
1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster
1946607 - etcd readinessProbe is not reflective of actual readiness
1946705 - Fix issues with "search" capability in the Topology Quick Add component
1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation
1946788 - Serial tests are broken because of router
1946790 - Marketplace operator flakes Available=False OperatorStarting during updates
1946838 - Copied CSVs show up as adopted components
1946839 - [Azure] While mirroring images to private registry throwing error: invalid character '<' looking for beginning of value
1946865 - no "namespace:kube_pod_container_resource_requests_cpu_cores:sum" and "namespace:kube_pod_container_resource_requests_memory_bytes:sum" metrics
1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken
1946922 - Ingress details page doesn't show referenced secret name and link
1946929 - the default dns operator's Progressing status is always True and cluster operator dns Progressing status is False
1947036 - "failed to create Matchbox client or connect" on e2e-metal jobs or metal clusters via cluster-bot
1947066 - machine-config-operator pod crashes when noProxy is *
1947067 - [Installer] Pick up upstream fix for installer console output
1947078 - Incorrect skipped status for conditional tasks in the pipeline run
1947080 - SNO IPv6 with 'temporary 60-day domain' option fails with IPv4 exception
1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1947164 - Print "Successfully pushed" even if the build push fails.
1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed.
1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48)
1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name's
1947360 - [vSphere csi driver operator] operator pod runs as "BestEffort" qosClass
1947371 - [vSphere csi driver operator] operator doesn't create "csidriver" instance
1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout
1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)
1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot
1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)
1947663 - disk details are not synced in web-console
1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin
1947684 - MCO on SNO sometimes has rendered configs and sometimes does not
1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals.
1947719 - 8 APIRemovedInNextReleaseInUse info alerts display
1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods
1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?
1947771 - [kube-descheduler]descheduler operator pod should not run as "BestEffort" qosClass
1947774 - CSI driver operators use "Always" imagePullPolicy in some containers
1947775 - [vSphere csi driver operator] doesn't use the downstream images from payload.
1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade
1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display
1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display
1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display
1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display
1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin
1947828 - `download it` link should save pod log in <pod-name>-<container-name>.log format
1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel is changed
1947917 - Egress Firewall does not reliably apply firewall rules
1947946 - Operator upgrades can delete existing CSV before completion
1948011 - openshift-controller-manager constantly reporting type "Upgradeable" status Unknown
1948012 - service-ca constantly reporting type "Upgradeable" status Unknown
1948019 - [4.8] Large number of requests to the infrastructure cinder volume service
1948022 - Some on-prem namespaces missing from must-gather
1948040 - cluster-etcd-operator: etcd is using deprecated logger
1948082 - Monitoring should not set Available=False with no reason on updates
1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O.
1948232 - DNS operator performs spurious updates in response to API's defaulting of daemonset's maxSurge and service's ipFamilies and ipFamilyPolicy fields
1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later
1948359 - [aws] shared tag was not removed from user provided IAM role
1948410 - [LSO] Local Storage Operator uses imagePullPolicy as "Always"
1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn't take effective after changing
1948427 - No action is triggered after click 'Continue' button on 'Show community Operator' windows
1948431 - TechPreviewNoUpgrade does not enable CSI migration
1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node
1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge
1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]
1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes
1948513 - get-resources.sh doesn't honor the no_proxy settings
1948524 - 'DeploymentUpdated' Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute
1948546 - VM of worker is in error state when a network has port_security_enabled=False
1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand
1948555 - A lot of events "rpc error: code = DeadlineExceeded desc = context deadline exceeded" were seen in azure disk csi driver verification test
1948563 - End-to-End Secure boot deployment fails "Invalid value for input variable"
1948582 - Need ability to specify local gateway mode in CNO config
1948585 - Need a CI jobs to test local gateway mode with bare metal
1948592 - [Cluster Network Operator] Missing Egress Router Controller
1948606 - DNS e2e test fails "[sig-arch] Only known images used by tests" because it does not use a known image
1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
1948626 - TestRouteAdmissionPolicy e2e test is failing often
1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI
1948634 - upgrades: allow upgrades without version change
1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found
1948701 - unneeded CCO alert already covered by CVO
1948703 - p&f: probes should not get 429s
1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows `bootstrap.ign was not found`
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query
1948936 - [e2e][automation][prow] Prow script point to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere-* images to vsphere-* images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to secret lack(in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listeners timeout are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use `oc adm top is` shortcut when asking for `imagestreams`
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command `ccoctl aws create-identity-provider` with `--output-dir` parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE]console page show error when vm is poused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utlization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - "Create" button on the Templates page is confuse
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - "Customize virtual machine" and the "Advanced" feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE]It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidr's apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build does not available for OCP4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster
1954421 - Get 'Application is not available' when access Prometheus UI
1954459 - Error: Gateway Time-out display on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (`UtilizationCard`) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] `oc get apirequestcount` shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by
NM\n1955196 - linuxptp-daemon crash on 4.8\n1955226 - operator updates apirequestcount CRD over and over\n1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing\n1955256 - stop collecting API that no longer exists\n1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts\n1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains \"google\"\n1955414 - 4.8 -\u003e 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator\n1955445 - Drop crio image metrics with high cardinality\n1955457 - Drop container_memory_failures_total metric because of high cardinality\n1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter\n1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0\n1955478 - Drop high-cardinality metrics from kube-state-metrics which aren\u0027t used\n1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation\n1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range\n1955554 - MAO does not react to events triggered from Validating Webhook Configurations\n1955589 - thanos-querier should have a PodDisruptionBudget in HA topology\n1955595 - Add DevPreviewLongLifecycle Descheduler profile\n1955596 - Pods stuck in creation phase on realtime kernel SNO\n1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing\n1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status [\u0027installing\u0027, \u0027error\u0027]\n1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta\n1955749 - OCP branded templates need to be translated\n1955761 - packageserver clusteroperator does not set reason or message for Available condition\n1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces\n1955803 - OperatorHub 
- console accepts any value for \"Infrastructure features\" annotation\n1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables\n1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable\n1955862 - Local Storage Operator using LocalVolume CR fails to create PV\u0027s when backend storage failure is simulated\n1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct\n1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-\u003e Removed-\u003e Managed in configs.imageregistry with high ratio\n1955969 - Workers cannot be deployed attached to multiple networks. \n1956079 - Installer gather doesn\u0027t collect any networking information\n1956208 - Installer should validate root volume type\n1956220 - Set htt proxy system properties as expected by kubernetes-client\n1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet\n1956334 - Event Listener Details page does not show Triggers section\n1956353 - test: analyze job consistently fails\n1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate\n1956405 - Bump k8s dependencies in cluster resource override admission operator\n1956411 - Apply custom tags to AWS EBS volumes\n1956480 - [4.8] Bootimage bump tracker\n1956606 - probes FlowSchema manifest not included in any cluster profile\n1956607 - Multiple manifests lack cluster profile annotations\n1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup\n1956610 - manage-helm-repos manifest lacks cluster profile annotations\n1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string\n1956650 - The container disk URL is empty for Windows guest tools\n1956768 - 
aws-ebs-csi-driver-controller-metrics TargetDown\n1956826 - buildArgs does not work when the value is taken from a secret\n1956895 - Fix chatty kubelet log message\n1956898 - fix log files being overwritten on container state loss\n1956920 - can\u0027t open terminal for pods that have more than one container running\n1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false\n1956978 - Installer gather doesn\u0027t include pod names in filename\n1957039 - Physical VIP for pod -\u003e Svc -\u003e Host is incorrectly set to an IP of 169.254.169.2 for Local GW\n1957041 - Update CI e2echart with more node info\n1957127 - Delegated authentication: reduce the number of watch requests\n1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the \"tests\" image\n1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes\n1957149 - CI: \"Managed cluster should start all core operators\" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: \"volumesnapshotclass.yaml\" (string): missing dynamicClient\n1957179 - Incorrect VERSION in node_exporter\n1957190 - CI jobs failing due too many watch requests (prometheus-operator)\n1957198 - Misspelled console-operator condition\n1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap\n1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2\n1957261 - update godoc for new build status image change trigger fields\n1957295 - Apply priority classes conventions as test to openshift/origin repo\n1957315 - kuryr-controller doesn\u0027t indicate being out of quota\n1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly\n1957374 - mcddrainerr doesn\u0027t list specific pod\n1957386 - Config serve and validate command should be under alpha\n1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions\n1957502 - Infrequent 
panic in kube-apiserver in aws-serial job\n1957561 - lack of pseudolocalization for some text on Cluster Setting page\n1957584 - Routes are not getting created  when using hostname  without FQDN standard\n1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone\n1957645 - Event \"Updated PrometheusRule.monitoring.coreos.com/v1 because it changed\" is frequently looped with weird empty {} changes\n1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP\u0027s\n1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out\n1957748 - Ptp operator pod should have CPU and memory requests set but not limits\n1957756 - Device Replacemet UI, The status of the disk is \"replacement ready\" before I clicked on \"start replacement\"\n1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1957775 - CVO creating cloud-controller-manager too early causing upgrade failures\n1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error\n1957822 - Update apiserver tlsSecurityProfile description to include Custom profile\n1957832 - CMO end-to-end tests work only on AWS\n1957856 - \u0027resource name may not be empty\u0027 is shown in CI testing\n1957869 - baremetal IPI power_interface for irmc is inconsistent\n1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects\n1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer\n1957893 - ClusterDeployment / Agent conditions show \"ClusterAlreadyInstalling\" during each spoke install\n1957895 - Cypress helper projectDropdown.shouldContain is not an assertion\n1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator\u0027s version reads\n1957926 - \"Add Capacity\" should allow to add n*3 (or n*4) local devices at 
once\n1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state\n1957967 - Possible test flake in listPage Cypress view\n1957972 - Leftover templates from mdns\n1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7\n1957982 - Deployment Actions clickable for view-only projects\n1957991 - ClusterOperatorDegraded can fire during installation\n1958015 - \"config-reloader-cpu\" and \"config-reloader-memory\" flags have been deprecated for prometheus-operator\n1958080 - Missing i18n for login, error and selectprovider pages\n1958094 - Audit log files are corrupted sometimes\n1958097 - don\u0027t show \"old, insecure token format\" if the token does not actually exist\n1958114 - Ignore staged vendor files in pre-commit script\n1958126 - [OVN]Egressip doesn\u0027t take effect\n1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs\n1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names\n1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs\n1958285 - Deployment considered unhealthy despite being available and at latest generation\n1958296 - OLM must explicitly alert on deprecated APIs in use\n1958329 - pick 97428: add more context to log after a request times out\n1958367 - Build metrics do not aggregate totals by build strategy\n1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton\n1958405 - etcd: current health checks and reporting are not adequate to ensure availability\n1958406 - Twistlock flags mode of /var/run/crio/crio.sock\n1958420 - openshift-install 4.7.10 fails with segmentation error\n1958424 - aws: support more auth options in manual mode\n1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View\n1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse\n1958643 - All pods creation stuck due 
to SR-IOV webhook timeout\n1958679 - Compression on pool can\u0027t be disabled via UI\n1958753 - VMI nic tab is not loadable\n1958759 - Pulling Insights report is missing retry logic\n1958811 - VM creation fails on API version mismatch\n1958812 - Cluster upgrade halts as machine-config-daemon fails to parse `rpm-ostree status` during cluster upgrades\n1958861 - [CCO] pod-identity-webhook certificate request failed\n1958868 - ssh copy is missing when vm is running\n1958884 - Confusing error message when volume AZ not found\n1958913 - \"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs\n1958958 - [SCALE] segfault with ovnkube adding to address set\n1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes\n1959041 - LSO Cluster UI,\"Troubleshoot\" link does not exist after scale down osd pod\n1959058 - ovn-kubernetes has lock contention on the LSP cache\n1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change\n1959177 - Descheduler dev manifests are missing permissions\n1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload\n1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates\n1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring\n1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check\n1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system\n1959406 - Difficult to debug performance on ovn-k without pprof enabled\n1959471 - Kube sysctl conformance tests are disabled, meaning we can\u0027t submit conformance results\n1959479 - machines doesn\u0027t support dual-stack loadbalancers on Azure\n1959513 
- Cluster-kube-apiserver does not use library-go for audit pkg\n1959519 - Operand details page only renders one status donut no matter how many \u0027podStatuses\u0027 descriptors are used\n1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console\n1959564 - Test verify /run filesystem contents failing\n1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot\n1959650 - Gather SDI-related MachineConfigs\n1959658 - showing a lot \"constructing many client instances from the same exec auth config\"\n1959696 - Deprecate \u0027ConsoleConfigRoute\u0027 struct in console-operator config\n1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO\n1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode\n1959711 - Egressnetworkpolicy  doesn\u0027t work when configure the EgressIP\n1959786 - [dualstack]EgressIP doesn\u0027t work on dualstack cluster for IPv6\n1959916 - Console not works well against a proxy in front of openshift clusters\n1959920 - UEFISecureBoot set not on the right master node\n1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []\n1960035 - iptables is missing from ose-keepalived-ipfailover image\n1960059 - Remove \"Grafana UI\" link from Console Monitoring \u003e Dashboards page\n1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions\n1960129 - [e2e][automation] add smoke tests about VM pages and actions\n1960134 - some origin images are not public\n1960171 - Enable SNO checks for image-registry\n1960176 - CCO should recreate a user for the component when it was removed from the cloud providers\n1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled\n1960255 - fixed obfuscation permissions\n1960257 - breaking changes in pr template\n1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on 
shutdown, policy Cluster has significant performance cost\n1960323 - Address issues raised by coverity security scan\n1960324 - manifests: extra \"spec.version\" in console quickstarts makes CVO hotloop\n1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960339 - manifests: unset \"preemptionPolicy\" makes CVO hotloop\n1960531 - Items under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027 keep added for every access\n1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana\n1960546 - Add virt_platform metric to the collected metrics\n1960554 - Remove rbacv1beta1 handling code\n1960612 - Node disk info in overview/details does not account for second drive where /var is located\n1960619 - Image registry integration tests use old-style OAuth tokens\n1960683 - GlobalConfigPage is constantly requesting resources\n1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces\n1960716 - Missing details for debugging\n1960732 - Outdated manifests directory in CSI driver operator repositories\n1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master\n1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be \"the newest\"\n1960767 - /metrics endpoint of the Grafana UI is accessible without authentication\n1960780 - CI: failed to create PDB \"service-test\" the server could not find the requested resource\n1961064 - Documentation link to network policies is outdated\n1961067 - Improve log gathering logic\n1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs\n1961091 - Gather MachineHealthCheck definitions\n1961120 - CSI driver operators fail when 
upgrading a cluster\n1961173 - recreate existing static pod manifests instead of updating\n1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing\n1961314 - Race condition in operator-registry pull retry unit tests\n1961320 - CatalogSource does not emit any metrics to indicate if it\u0027s ready or not\n1961336 - Devfile sample for BuildConfig is not defined\n1961356 - Update single quotes to double quotes in string\n1961363 - Minor string update for \" No Storage classes found in cluster, adding source is disabled.\"\n1961393 - DetailsPage does not work with group~version~kind\n1961452 - Remove \"Alertmanager UI\" link from Console Monitoring \u003e Alerting page\n1961466 - Some dropdown placeholder text on route creation page is not translated\n1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true\n1961506 - NodePorts do not work on RHEL 7.9 workers (was \"4.7 -\u003e 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers\")\n1961536 - clusterdeployment without pull secret is crashing assisted service pod\n1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop\n1961545 - Fixing Documentation Generation\n1961550 - HAproxy pod logs showing error \"another server named \u0027pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080\u0027 was already defined at line 326, please use distinct names\"\n1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig\n1961561 - The encryption controllers send lots of request to an API server\n1961582 - Build failure on s390x\n1961644 - NodeAuthenticator tests are failing in IPv6\n1961656 - driver-toolkit missing some release metadata\n1961675 - Kebab menu of taskrun contains Edit options which should not be present\n1961701 - Enhance gathering of events\n1961717 - Update runtime dependencies to Wallaby builds for bugfixes\n1961829 - 
Quick starts prereqs not shown when description is long\n1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy\n1961878 - Add Sprint 199 translations\n1961897 - Remove history listener before console UI is unmounted\n1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes\n1962062 - Monitoring dashboards should support default values of \"All\"\n1962074 - SNO:the pod get stuck in CreateContainerError and prompt \"failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable\" after adding a performanceprofile\n1962095 - Replace gather-job image without FQDN\n1962153 - VolumeSnapshot routes are ambiguous, too generic\n1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime\n1962219 - NTO relies on unreliable leader-for-life implementation. \n1962256 - use RHEL8 as the vm-example\n1962261 - Monitoring components requesting more memory than they use\n1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster\n1962347 - Cluster does not exist logs after successful installation\n1962392 - After upgrade from 4.5.16 to 4.6.17, customer\u0027s application is seeing re-transmits\n1962415 - duplicate zone information for in-tree PV after enabling migration\n1962429 - Cannot create windows vm because kubemacpool.io denied the request\n1962525 - [Migration] SDN migration stuck on MCO on RHV cluster\n1962569 - NetworkPolicy details page should also show Egress rules\n1962592 - Worker nodes restarting during OS installation\n1962602 - Cloud credential operator scrolls info \"unable to provide upcoming...\" on unsupported platform\n1962630 - NTO: Ship the current upstream TuneD\n1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root\n1962698 - Console-operator can not create resource console-public 
configmap in the openshift-config-managed namespace\n1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint\n1962740 - Add documentation to Egress Router\n1962850 - [4.8] Bootimage bump tracker\n1962882 - Version pod does not set priorityClassName\n1962905 - Ramdisk ISO source defaulting to \"http\" breaks deployment on a good amount of BMCs\n1963068 - ironic container should not specify the entrypoint\n1963079 - KCM/KS: ability to enforce localhost communication with the API server. \n1963154 - Current BMAC reconcile flow skips Ironic\u0027s deprovision step\n1963159 - Add Sprint 200 translations\n1963204 - Update to 8.4 IPA images\n1963205 - Installer is using old redirector\n1963208 - Translation typos/inconsistencies for Sprint 200 files\n1963209 - Some strings in public.json have errors\n1963211 - Fix grammar issue in kubevirt-plugin.json string\n1963213 - Memsource download script running into API error\n1963219 - ImageStreamTags not internationalized\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n1963267 - Warning: Invalid DOM property `classname`. Did you mean `className`? console warnings in volumes table\n1963502 - create template from is not descriptive\n1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too\n1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault\n1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 
\n1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies\n1963871 - cluster-etcd-operator:[build] upgrade to go 1.16\n1963896 - The VM disks table does not show easy links to PVCs\n1963912 - \"[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}\" failures on vsphere\n1963932 - Installation failures in bootstrap in OpenStack release jobs\n1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail\n1964059 - rebase openshift/sdn to kube 1.21.1\n1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration\n1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to \"Unknown provider baremetal\"\n1964243 - The `oc compliance fetch-raw` doesn\u2019t work for disconnected cluster\n1964270 - Failed to install \u0027cluster-kube-descheduler-operator\u0027 with error: \"clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\\\": must be no more than 63 characters\"\n1964319 - Network policy \"deny all\" interpreted as \"allow all\" in description page\n1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured\n1964472 - Make project and namespace requirements more visible rather than giving me an error after submission\n1964486 - Bulk adding of CIDR IPS to whitelist is not working\n1964492 - Pick 102171: Implement support for watch initialization in P\u0026F\n1964625 - NETID duplicate check is only required in NetworkPolicy Mode\n1964748 - Sync upstream 1.7.2 downstream\n1964756 - PVC status is always in \u0027Bound\u0027 status when it is actually cloning\n1964847 - Sanity check test suite missing from the repo\n1964888 - opoenshift-apiserver imagestreamimports depend on \u003e34s timeout support, WAS: transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\n1964936 - error log for \"oc adm catalog mirror\" is not correct\n1964979 - Add mapping from ACI to infraenv to handle creation order issues\n1964997 - Helm Library charts are showing and can be installed from Catalog\n1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots\n1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation\n1965283 - 4.7-\u003e4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData:\n1965330 - oc image extract fails due to security capabilities on files\n1965334 - opm index add fails during image extraction\n1965367 - Typo in in etcd-metric-serving-ca resource name\n1965370 - \"Route\" is not translated in Korean or Chinese\n1965391 - When storage class is already present wizard do not jumps to \"Stoarge and nodes\"\n1965422 - runc is missing Provides oci-runtime in rpm spec\n1965522 - [v2v] Multiple typos on VM Import screen\n1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists\n1965909 - Replace \"Enable Taint Nodes\" by \"Mark nodes as dedicated\"\n1965921 - [oVirt] High performance VMs shouldn\u0027t be created with Existing policy\n1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request\n1966077 - `hidden` descriptor is visible in the Operator instance details page`\n1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11\n1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality\n1966138 - (release-4.8) Update K8s \u0026 OpenShift API versions\n1966156 - Issue with Internal Registry CA on the service pod\n1966174 - No storage class is installed, OCS and CNV installations fail\n1966268 - Workaround for Network Manager not supporting nmconnections priority\n1966401 - Revamp Ceph Table in 
Install Wizard flow\n1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert\n1966416 - (release-4.8) Do not exceed the data size limit\n1966459 - \u0027policy/v1beta1 PodDisruptionBudget\u0027 and \u0027batch/v1beta1 CronJob\u0027 appear in image-registry-operator log\n1966487 - IP address in Pods list table are showing node IP other than pod IP\n1966520 - Add button from ocs add capacity should not be enabled if there are no PV\u0027s\n1966523 - (release-4.8) Gather MachineAutoScaler definitions\n1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed\n1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug\n1966602 - don\u0027t require manually setting IPv6DualStack feature gate in 4.8\n1966620 - The bundle.Dockerfile in the repo is obsolete\n1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install\n1966654 - Alertmanager PDB is not created, but Prometheus UWM is\n1966672 - Add Sprint 201 translations\n1966675 - Admin console string updates\n1966677 - Change comma to semicolon\n1966683 - Translation bugs from Sprint 201 files\n1966684 - Verify \"Creating snapshot for claim \u003c1\u003e{pvcName}\u003c/1\u003e\" displays correctly\n1966697 - Garbage collector logs every interval - move to debug level\n1966717 - include full timestamps in the logs\n1966759 - Enable downstream plugin for Operator SDK\n1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version\n1966813 - \"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1\n1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting for bootkub[e\"\n1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings \"ipv6.dhcp-duid=ll\" missing from dual stack 
install\n1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image\n1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored\n1967197 - 404 errors loading some i18n namespaces\n1967207 - Getting started card: console customization resources link shows other resources\n1967208 - Getting started card should use semver library for parsing the version instead of string manipulation\n1967234 - Console is continuously polling for ConsoleLink acm-link\n1967275 - Awkward wrapping in getting started dashboard card\n1967276 - Help menu tooltip overlays dropdown\n1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check\n1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit\n1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests\n1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small\n1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit\n1967595 - Fixes the remaining lint issues\n1967614 - prometheus-k8s pods can\u0027t be scheduled due to volume node affinity conflict\n1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn\u0027t work if ovirt-config.yaml doesn\u0027t exist and user should fill the FQDN URL\n1967625 - Add OpenShift Dockerfile for cloud-provider-aws\n1967631 - [4.8.0] Cluster install failed due to timeout while \"Waiting for control plane\"\n1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting 
for bootkube\"\n1967639 - Console whitescreens if user preferences fail to load\n1967662 - machine-api-operator should not use deprecated \"platform\" field in infrastructures.config.openshift.io\n1967667 - Add Sprint 202 Round 1 translations\n1967713 - Insights widget shows invalid link to the OCM\n1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming\n1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than \"NoExecute\"\n1967803 - should update to 7.5.5 for grafana resources version label\n1967832 - Add more tests for periodic.go\n1967833 - Add tasks pool to tasks_processing\n1967842 - Production logs are spammed on \"OCS requirements validation status Insufficient hosts to deploy OCS. A minimum of 3 hosts is required to deploy OCS\"\n1967843 - Fix null reference to messagesToSearch in gather_logs.go\n1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring\n1967933 - Network-Tools debug scripts not working as expected\n1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: \"mkdir: cannot create directory \u0027/var/lib/pgsql/data/userdata\u0027: Permission denied\"\n1968019 - drain timeout and pool degrading period is too short\n1968067 - [master] Agent validation not including reason for being insufficient\n1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed\n1968175 - [4.8.0] Agent validation not including reason for being insufficient\n1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration\n1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn\u0027t be required\n1968435 - [4.8.0] Unclear message in case of missing clusterImageSet\n1968436 - Listeners timeout updated to remain using default value\n1968449 - [4.8.0] Wrong Install-config override documentation\n1968451 - [4.8.0] Garbage collector not cleaning up directories of removed 
clusters\n1968452 - [4.8.0] [doc] \"Mirror Registry Configuration\" doc section needs clarification of functionality and limitations\n1968454 - [4.8.0] backend events generated with wrong namespace for agent\n1968455 - [4.8.0] Assisted Service operator\u0027s controllers are starting before the base service is ready\n1968515 - oc should set user-agent when talking with registry\n1968531 - Sync upstream 1.8.0 downstream\n1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn\u0027t clean up properly\n1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted\n1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox\n1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil\n1968701 - Bare metal IPI installation is failed due to worker inspection failure\n1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed\n1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning\n1969284 - Console Query Browser: Can\u0027t reset zoom to fixed time range after dragging to zoom\n1969315 - [4.8.0] BMAC doesn\u0027t check if ISO Url changed before queuing BMH for reconcile\n1969352 - [4.8.0] Creating BareMetalHost without the \"inspect.metal3.io\" does not automatically add it\n1969363 - [4.8.0] Infra env should show the time that ISO was generated. 
\n1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it\n1969386 - Filesystem\u0027s Utilization doesn\u0027t show in VM overview tab\n1969397 - OVN bug causing subports to stay DOWN fails installations\n1969470 - [4.8.0] Misleading error in case of install-config override bad input\n1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step\n1969525 - Replace golint with revive\n1969535 - Topology edit icon does not link correctly when branch name contains slash\n1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it\n1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long\n1969561 - Test \"an end user can use OLM can subscribe to the operator\" generates deprecation alert\n1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire\n1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io\n1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1\n1969626 - Portfoward stream cleanup can cause kubelet to panic\n1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out\n1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check\n1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso\n1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups\n1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml\n1969784 - WebTerminal widget should send resize events\n1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails\n1969891 - Fix rotated pipelinerun status icon issue in safari\n1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse\n1969903 - Provisioning a large number 
of hosts results in an unexpected delay in hosts becoming available\n1969951 - Cluster local doesn\u0027t work for knative services created from dev console\n1969969 - ironic-rhcos-downloader container uses and old base image\n1970062 - ccoctl does not work with STS authentication\n1970068 - ovnkube-master logs \"Failed to find node ips for gateway\" error\n1970126 - [4.8.0] Disable \"metrics-events\" when deploying using the operator\n1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change\n1970262 - [4.8.0] Remove Agent CRD Status fields not needed\n1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs\n1970269 - [4.8.0] missing role in agent CRD\n1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs\n1970381 - Monitoring dashboards: Custom time range inputs should retain their values\n1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed\n1970401 - [4.8.0] AgentLabelSelector is required yet not supported\n1970415 - SR-IOV Docs needs documentation for disabling port security on a network\n1970470 - Add pipeline annotation to Secrets which are created for a private repo\n1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod\n1970624 - 4.7-\u003e4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io\n1970828 - \"500 Internal Error\" for all openshift-monitoring routes\n1970975 - 4.7 -\u003e 4.8 upgrades on AWS take longer than expected\n1971068 - Removing invalid AWS instances from the CF templates\n1971080 - 4.7-\u003e4.8 CI: KubePodNotReady due to MCD\u0027s 5m sleep between drain attempts\n1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 !\n1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces\n1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing 
\"Validated\" condition about VIP not matching machine network\n1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn\u0027t work - clusteroperator/kube-apiserver is not upgradeable\n1971589 - [4.8.0] Telemetry-client won\u0027t report metrics in case the cluster was installed using the assisted operator\n1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service\n1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery\n1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409)\n1971739 - Keep /boot RW when kdump is enabled\n1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly\n1972128 - ironic-static-ip-manager container still uses 4.7 base image\n1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are\n1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster\n1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1972262 - [4.8.0] \"baremetalhost.metal3.io/detached\" uses boolean value where string is expected\n1972426 - Adopt failure can trigger deprovisioning\n1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage\n1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration\n1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall\n1972565 - performance issues due to lost node, pods taking too long to relaunch\n1972662 - DPDK KNI modules need some additional tools\n1972676 - Requirements for authenticating kernel modules with X.509\n1972687 - Using bound SA tokens causes causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings\n1972690 - [4.8.0] infra-env condition message isn\u0027t informative in case of 
missing pull secret\n1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration\n1972768 - kube-apiserver setup fail while installing SNO due to port being used\n1972864 - New `local-with-fallback` service annotation does not preserve source IP\n1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8\n1973117 - No storage class is installed, OCS and CNV installations fail\n1973233 - remove kubevirt images and references\n1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. \n1973428 - Placeholder bug for OCP 4.8.0 image release\n1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped\n1973672 - fix ovn-kubernetes NetworkPolicy 4.7-\u003e4.8 upgrade issue\n1973995 - [Feature:IPv6DualStack] tests are failing in dualstack\n1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings\n1974447 - Requirements for nvidia GPU driver container for driver toolkit\n1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. \n1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel\n1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion\n1974746 - [4.8.0] File system usage not being logged appropriately\n1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 
\n1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster\n1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string\n1974850 - [4.8] coreos-installer failing Execshield\n1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift\n1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing\n1975155 - Kubernetes service IP cannot be accessed for rhel worker\n1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types\n1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData\n1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified\n1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve\n1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn\n1975672 - [4.8.0] Production logs are spammed on \"Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient\"\n1975789 - worker nodes rebooted when we simulate a case where the api-server is down\n1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s]\n1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn\u0027t work -  ingresscontroller \"default\" is degraded\n1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]\n1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts\n1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO\n1977233 - [4.8] Unable to 
authenticate against IDP after upgrade to 4.8-rc.1\n1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO\n1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller\n1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes\n1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses\n1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8\n1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod\n1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used\n1980788 - NTO-shipped stalld can segfault\n1981633 - enhance service-ca injection\n1982250 - Performance Addon Operator fails to install after catalog source becomes ready\n1982252 - olm Operator is in CrashLoopBackOff state with error \"couldn\u0027t cleanup cross-namespace ownerreferences\"\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2016-2183\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-15106\nhttps://access.redhat.com/security/cve/CVE-2020-15112\nhttps://access.redhat.com/security/cve/CVE-2020-15113\nhttps://access.redhat.com/security/cve/CVE-2020-15114\nhttps://access.redhat.com/security/cve/CVE-2020-15136\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-26541\nhttps://access.redhat.com/security/cve/CVE-2020-28469\nhttps://access.redhat.com/security/cve/CVE-2020-28500\nhttps://access.redhat.com/security/cve/CVE-2020-28852\nhttps://access.redhat.com/security/cve/CVE-2021-3114\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3636\nhttps://access.redhat.com/security/cve/CVE-2021-20206\nhttps://access.redhat.com/security/cve/CVE-2021-20271\nhttps://access.redhat.com/security/cve/CVE-2021-20291\nhttps://access.redhat.com/security/cve/CVE-2021-21419\nhttps://access.redhat.com/security/cve/CVE-2021-21623\nhttps://access.redhat.com/security/cve/CVE-2021-21639\nhttps://access.redhat.com/security/cve/CVE-2021-21640\nhttps://access.redhat.com/security/cve/CVE-2021-21648\nhttps://access.redhat.com/security/cve/CVE-2021-22133\nhttps://access.redhat.com/security/cve/CVE-2021-23337\nhttps://access.redhat.com/security/cve/CVE-2021-23362\nhttps://access.redhat.com/security/cve/CVE-2021-23368\nhttps://access.redhat.com/security/cve/CVE-2021-23382\nhttps://access.redhat.com/security/cve/CVE-2021-25735\nhttps://access.redhat.com/security/cve/CVE-2021-25737\nhttps://access.r
edhat.com/security/cve/CVE-2021-26539\nhttps://access.redhat.com/security/cve/CVE-2021-26540\nhttps://access.redhat.com/security/cve/CVE-2021-27292\nhttps://access.redhat.com/security/cve/CVE-2021-28092\nhttps://access.redhat.com/security/cve/CVE-2021-29059\nhttps://access.redhat.com/security/cve/CVE-2021-29622\nhttps://access.redhat.com/security/cve/CVE-2021-32399\nhttps://access.redhat.com/security/cve/CVE-2021-33034\nhttps://access.redhat.com/security/cve/CVE-2021-33194\nhttps://access.redhat.com/security/cve/CVE-2021-33909\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ\nd4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26\nZ7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs\nImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h\ngep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8\nTWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX\nX//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb\n1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA\nmtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/\n0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6\ndpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx\n+bLvJlpGiMg=\n=NJcM\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 
nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - 
CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. VDSM manages and monitors the host\u0027s storage, memory and\nnetworks as well as virtual machine creation, other host administration\ntasks, statistics gathering, and log collection. \n\nBug Fix(es):\n\n* An update in libvirt has changed the way block threshold events are\nsubmitted. \nAs a result, the VDSM was confused by the libvirt event, and tried to look\nup a drive, logging a warning about a missing drive. \nIn this release, the VDSM has been adapted to handle the new libvirt\nbehavior, and does not log warnings about missing drives. (BZ#1948177)\n\n* Previously, when a virtual machine was powered off on the source host of\na live migration and the migration finished successfully at the same time,\nthe two events  interfered with each other, and sometimes prevented\nmigration cleanup resulting in additional migrations from the host being\nblocked. \nIn this release, additional migrations are not blocked. (BZ#1959436)\n\n* Previously, when failing to execute a snapshot and re-executing it later,\nthe second try would fail due to using the previous execution data. In this\nrelease, this data will be used only when needed, in recovery mode. \n(BZ#1984209)\n\n4. Then engine deletes the volume and causes data corruption. \n1998017 - Keep cinbderlib dependencies optional for 4.4.8\n\n6. 
\n\nBug Fix(es):\n\n* Documentation is referencing deprecated API for Service Export -\nSubmariner (BZ#1936528)\n\n* Importing of cluster fails due to error/typo in generated command\n(BZ#1936642)\n\n* RHACM 2.2.2 images (BZ#1938215)\n\n* 2.2 clusterlifecycle fails to allow provision `fips: true` clusters on\naws, vsphere (BZ#1941778)\n\n3. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-23337",
        "trust": 4.1
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.4
      },
      {
        "db": "PACKETSTORM",
        "id": "162901",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "162151",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "163690",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164090",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1225",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1871",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5790",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3036",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2232",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2182",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2555",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2657",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4568",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2555",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5150",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072040",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021062703",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021051230",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022012753",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022011901",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022052615",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021090922",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-381798",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163276",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163747",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168352",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ]
  },
  "id": "VAR-202102-1466",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      }
    ],
    "trust": 0.30766129
  },
  "last_update_date": "2023-12-18T10:45:22.903000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "NTAP-20210312-0006",
        "trust": 0.8,
        "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
      },
      {
        "title": "IBM: Security Bulletin: IBM App Connect Enterprise Certified Container may be vulnerable to a command injection vulnerability (CVE-2021-23337)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=a6ab32faf6383cb0cedc0fcc02621330"
      },
      {
        "title": "Debian CVElist Bug Report Logs: CVE-2021-23337 CVE-2020-28500",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=705b23b69122ed473c796891371a9f52"
      },
      {
        "title": "IBM: Security Bulletin: A security vulnerability in Node.js lodash module affects IBM Cloud Pak for Multicloud Management Managed Service",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=be717afa91143ef04a4f0fde16d094de"
      },
      {
        "title": "IBM: Security Bulletin: IBM Watson OpenScale on Cloud Pak for Data is impacted by Vulnerabilities in Node.js",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3a6796f7c08575af6f64adb2d3b31adb"
      },
      {
        "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory"
      },
      {
        "title": "blank",
        "trust": 0.1,
        "url": "https://github.com/cduplantis/blank "
      },
      {
        "title": "Example.EWA.TypeScript.WebApplication",
        "trust": 0.1,
        "url": "https://github.com/refinitiv-api-samples/example.ewa.typescript.webapplication "
      },
      {
        "title": "loginServer",
        "trust": 0.1,
        "url": "https://github.com/did-create-board/loginserver"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-94",
        "trust": 1.1
      },
      {
        "problemtype": "Command injection (CWE-77) [NVD evaluation ]",
        "trust": 0.8
      },
      {
        "problemtype": "CWE-77",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.3,
        "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
      },
      {
        "trust": 1.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23337"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
      },
      {
        "trust": 1.7,
        "url": "https://github.com/lodash/lodash/blob/ddfd9b11a0126db2302cb70ec9973b66baec0975/lodash.js%23l14851"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgfujionwebjars-1074932"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjars-1074930"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbower-1074928"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbowergithublodash-1074931"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsnpm-1074929"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-js-lodash-1040724"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99475301/"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-28500"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-23337"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-automation-manager/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-watson-discovery-for-ibm-cloud-pak-for-data-affected-by-vulnerability-in-node-js-3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2657"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1225"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162901/red-hat-security-advisory-2021-2179-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-insights-is-affected-by-multiple-vulnerabilities-5/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-integration-bus-ibm-app-connect-enterprise-v11-are-affected-by-vulnerabilities-in-node-js-cve-2021-23337/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-potential-vulnerability-with-node-js-lodash-module-3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022012753"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164090/red-hat-security-advisory-2021-3459-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6494365"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1871"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6493751"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022011901"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3036"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021090922"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2555"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-pak-for-multicloud-management-managed-service-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022052615"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-automation-manager-3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6486333"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6524656"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162151/red-hat-security-advisory-2021-1168-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072040"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021062703"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021051230"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-cloud-pak-for-integration-is-vulnerable-to-node-js-lodash-vulnerability-cve-2021-23337/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-watson-openscale-on-cloud-pak-for-data-is-impacted-by-vulnerabilities-in-node-js/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2232"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163690/red-hat-security-advisory-2021-2438-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5150"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2555"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2182"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5790"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-app-connect-enterprise-certified-container-may-be-vulnerable-to-a-command-injection-vulnerability-cve-2021-23337/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-28852"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-2708"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8286"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8285"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3114"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8231"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27219"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8284"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/2974891"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33034"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-28092"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33909"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-32399"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23368"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27292"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23382"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21321"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28851"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21322"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13949"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhb"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3842"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13776"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3177"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13949"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2179"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/technical_notes"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25737"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26540"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23368"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33194"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26539"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29059"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2438"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2437"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23382"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29622"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29418"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23369"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23364"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23343"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21309"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23383"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28918"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33033"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25217"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3016"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3377"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21272"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29477"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23346"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29478"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23839"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33910"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3459"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:1168"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26708"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27152"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21322"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26708"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2526"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29154"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0686"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32208"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6429"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0512"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1650"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-02-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "date": "2021-02-15T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "date": "2021-04-05T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "date": "2021-06-24T17:54:53",
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "date": "2021-06-01T15:17:45",
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "date": "2021-07-28T14:53:49",
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "date": "2021-08-06T14:02:37",
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "date": "2021-09-09T13:33:33",
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "date": "2021-04-13T15:38:30",
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "date": "2022-09-13T15:42:14",
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "date": "2021-02-15T13:15:12.560000",
        "db": "NVD",
        "id": "CVE-2021-23337"
      },
      {
        "date": "2021-02-15T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "date": "2022-09-20T06:02:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "date": "2022-09-13T21:25:02.093000",
        "db": "NVD",
        "id": "CVE-2021-23337"
      },
      {
        "date": "2022-11-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Lodash\u00a0 Command injection vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "code injection",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ],
    "trust": 0.6
  }
}

var-202102-1492
Vulnerability from variot

Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Lodash contains this unspecified vulnerability, which may lead to service operation interruption (DoS). lodash is an open source JavaScript utility library. There is a security vulnerability in Lodash; monitor CNNVD or vendor announcements for updates. Description:

The ovirt-engine package provides the manager for virtualization environments. This manager enables admins to define hosts and networks, as well as to add storage, create VMs and manage user permissions.

Bug Fix(es):

  • This release adds the queue attribute to the virtio-scsi driver in the virtual machine configuration. This improvement enables multi-queue performance with the virtio-scsi driver. (BZ#911394)

  • With this release, source-load-balancing has been added as a new sub-option for xmit_hash_policy. It can be configured for bond modes balance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying xmit_hash_policy=vlan+srcmac. (BZ#1683987)

  • The default DataCenter/Cluster will be set to compatibility level 4.6 on new installations of Red Hat Virtualization 4.4.6.; (BZ#1950348)

  • With this release, support has been added for copying disks between regular Storage Domains and Managed Block Storage Domains. It is now possible to migrate disks between Managed Block Storage Domains and regular Storage Domains. (BZ#1906074)

  • Previously, the engine-config value LiveSnapshotPerformFreezeInEngine defaulted to false and was intended for use only in cluster compatibility levels below 4.4, but the value was applied globally across versions. With this release, each cluster level has its own value, defaulting to false for 4.4 and above. This reduces unnecessary overhead by removing timeouts of the file system freeze command. (BZ#1932284)

  • With this release, running virtual machines is supported for up to 16TB of RAM on x86_64 architectures. (BZ#1944723)

  • This release adds the gathering of oVirt/RHV related certificates to allow easier debugging of issues for faster customer help and issue resolution. Information from certificates is now included as part of the sosreport. Note that no corresponding private key information is gathered, due to security considerations. (BZ#1845877)

  • Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/2974891

  1. Bugs fixed (https://bugzilla.redhat.com/):

1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine 1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors 1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain 1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine 1717411 - improve engine logging when migration fail 1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs 1775145 - Incorrect message from hot-plugging memory 1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC. 1845877 - [RFE] Collect information about RHV PKI 1875363 - engine-setup failing on FIPS enabled rhel8 machine 1906074 - [RFE] Support disks copy between regular and managed block storage domains 1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration 1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning 1919195 - Unable to create snapshot without saving memory of running VM from VM Portal. 1919984 - engine-setup fails to deploy the grafana service in an external DWH server 1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal 1926018 - Failed to run VM after FIPS mode is enabled 1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing 'rsyslog-gnutls' package.
1928158 - Rename 'CA Certificate' link in welcome page to 'Engine CA certificate' 1928188 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX" 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1929211 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX" 1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error "missing groups or modules: virt:8.4" 1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful 1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured 1932284 - Engine handled FS freeze is not fast enough for Windows systems 1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed 1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2 1943267 - Snapshot creation is failing for VM having vGPU. 1944723 - [RFE] Support virtual machines with 16TB memory 1948577 - [welcome page] remove "Infrastructure Migration" section (obsoleted) 1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule 1949547 - rhv-log-collector-analyzer report contains 'b characters 1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6 1950466 - Host installation failed 1954401 - HP VMs pinning is wiped after edit->ok and pinned to first physical CPUs. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
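Bug 1928937 above (CVE-2021-23337) describes command injection via lodash's _.template. The root cause is that template compilation turns the template string into executable JavaScript, so an attacker who controls the template (or certain template options) effectively controls an eval. The following toy `<%= ... %>` compiler is a deliberately simplified, hypothetical stand-in for illustration only, not lodash's actual implementation:

```javascript
// Toy template compiler in the style of _.template: it splices the text
// inside <%= %> directly into a function body compiled with new Function.
// Anything inside the delimiters is evaluated as JavaScript, which is
// exactly why attacker-controlled templates mean arbitrary code execution.
function compile(template) {
  const body = "return `" + template.replace(/<%=(.+?)%>/g, "${$1}") + "`;";
  return new Function("data", "with (data) { " + body + " }");
}

// Benign use: interpolation works as expected.
const greet = compile("Hello <%= name %>!");
console.log(greet({ name: "world" })); // Hello world!

// Hostile use: the template author decides what runs. Here the "template"
// calls an arbitrary global function instead of merely reading data.
let pwned = false;
globalThis.markPwned = () => { pwned = true; return "x"; };
compile("<%= markPwned() %>")({});
console.log(pwned); // true
```

The mitigation in the advisory is simply the patched package; in application code, the corresponding rule is to never pass untrusted input as the template string itself, only as the data object.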

  1. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.8.2 bug fix and security update Advisory ID: RHSA-2021:2438-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2021:2438 Issue date: 2021-07-27 CVE Names: CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 CVE-2021-33909 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.8.2 is now available with updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container Platform 4.8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.8.2. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2021:2437

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Security Fix(es):

  • SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) (CVE-2016-2183)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)

  • etcd: DoS in wal/wal.go (CVE-2020-15112)

  • etcd: directories created via os.MkdirAll are not checked for permissions (CVE-2020-15113)

  • etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS (CVE-2020-15114)

  • etcd: no authentication is performed against endpoints provided in the --endpoints flag (CVE-2020-15136)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)

  • nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)

  • golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)

  • golang: crypto/elliptic: incorrect operations on the P-224 curve (CVE-2021-3114)

  • containernetworking-cni: Arbitrary path injection via type field in CNI configuration (CVE-2021-20206)

  • containers/storage: DoS via malicious image (CVE-2021-20291)

  • prometheus: open redirect under the /new endpoint (CVE-2021-29622)

  • golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)

  • go.elastic.co/apm: leaks sensitive HTTP headers during panic (CVE-2021-22133)

Space precludes listing in detail the following additional CVEs fixes: (CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382), (CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and (CVE-2021-23368)
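Among the lodash issues listed above, CVE-2020-28500 stems from backtracking regular expressions in the trim/toNumber helpers. A minimal sketch of the weakness and of a linear-time alternative in the spirit of the 4.17.21 fix (simplified illustration, not lodash's actual source):

```javascript
// Vulnerable style: a trailing-whitespace regex like /\s+$/ backtracks
// quadratically on input such as "a" + " ".repeat(n) + "b", because every
// start position inside the whitespace run is retried and fails at `$`.
function trimEndRegex(s) {
  return s.replace(/\s+$/, "");
}

// Safer style: scan characters from the end of the string; linear time,
// no regex backtracking regardless of input shape.
function trimEndScan(s) {
  let end = s.length;
  while (end > 0 && /\s/.test(s[end - 1])) end--;
  return s.slice(0, end);
}

// Both agree on ordinary input; only the scan version stays fast on
// adversarial input.
console.log(trimEndScan("report.txt   ")); // report.txt
```

This is why the advisory's remediation is a package upgrade rather than a configuration change: the fix replaces the pathological pattern inside the library itself.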

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Additional Changes:

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-x86_64

The image digest is sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-s390x

The image digest is sha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le

The image digest is sha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f

All OpenShift Container Platform 4.8 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor

  1. Solution:

For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) 1725981 - oc explain does not work well with full resource.group names 1747270 - [osp] Machine with name "-worker"couldn't join the cluster 1772993 - rbd block devices attached to a host are visible in unprivileged container pods 1786273 - [4.6] KAS pod logs show "error building openapi models ... has invalid property: anyOf" for CRDs 1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts 1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header 1812212 - ArgoCD example application cannot be downloaded from github 1817954 - [ovirt] Workers nodes are not numbered sequentially 1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole 1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1825417 - The containerruntimecontroller doesn't roll back to CR-1 if we delete CR-2 1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades 1835264 - Intree provisioner doesn't respect PVC.spec.dataSource sometimes 1839101 - Some sidebar links in developer perspective don't follow same project 1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes 1846875 - Network setup test high failure rate 1848151 - Console continues to poll the ClusterVersion resource when the user doesn't have authority 1850060 - After upgrading to 3.11.219 timeouts are appearing. 
1852637 - Kubelet sets incorrect image names in node status images section 1852743 - Node list CPU column only show usage 1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values 1857008 - [Edge] [BareMetal] Not provided STATE value for machines 1857477 - Bad helptext for storagecluster creation 1859382 - check-endpoints panics on graceful shutdown 1862084 - Inconsistency of time formats in the OpenShift web-console 1864116 - Cloud credential operator scrolls warnings about unsupported platform 1866222 - Should output all options when runing operator-sdk init --help 1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard 1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert 1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions 1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host 1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions 1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go 1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS 1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag 1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method 1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics 1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly 1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore 1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack 1873649 - proxy.config.openshift.io should validate user inputs 1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials 1874931 - Accessibility - 
Keyboard shortcut to exit YAML editor not easily discoverable
1876918 - scheduler test leaves taint behind
1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1
1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable
1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API
1879077 - Nodes tainted after configuring additional host iface
1879140 - console auth errors not understandable by customers
1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens
1879184 - CVO must detect or log resource hotloops
1879495 - [4.6] namespace "openshift-user-workload-monitoring" does not exist
1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string
1879944 - [OCP 4.8] Slow PV creation with vsphere
1880757 - AWS: master not removed from LB/target group when machine deleted
1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)
1881210 - nodePort for router-default metrics with NodePortService does not exist
1881481 - CVO hotloops on some service manifests
1881484 - CVO hotloops on deployment manifests
1881514 - CVO hotloops on imagestreams from cluster-samples-operator
1881520 - CVO hotloops on (some) clusterrolebindings
1881522 - CVO hotloops on clusterserviceversions packageserver
1881662 - Error getting volume limit for plugin kubernetes.io/ in kubelet logs
1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io
1881938 - migrator deployment doesn't tolerate masters
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883587 - No option for user to select volumeMode
1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine
1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster
1884800 - Failed to set up mount unit: Invalid argument
1885186 - Removing ssh keys MC does not remove the key from authorized_keys
1885349 - [IPI Baremetal] Proxy Information Not passed to metal3
1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses
1886572 - auth: error contacting auth provider when extra ingress (not default) goes down
1887849 - When creating new storage class failure_domain is missing.
1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs
1889689 - AggregatedAPIErrors alert may never fire
1890678 - Cypress: Fix 'structure' accesibility violations
1890828 - Intermittent prune job failures causing operator degradation
1891124 - CP Conformance: CRD spec and status failures
1891301 - Deleting bmh by "oc delete bmh' get stuck
1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass
1891766 - [LSO] Min-Max filter's from OCS wizard accepts Negative values and that cause PV not getting created
1892642 - oauth-server password metrics do not appear in UI after initial OCP installation
1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version
1893850 - Add an alert for requests rejected by the apiserver
1893999 - can't login ocp cluster with oc 4.7 client without the username
1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion
1895053 - Allow builds to optionally mount in cluster trust stores
1896226 - recycler-pod template should not be in kubelet static manifests directory
1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types
1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name
1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI install
1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout
1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1899057 - fix spurious br-ex MAC address error log
1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay
1899587 - [External] RGW usage metrics shown on Object Service Dashboard is incorrect
1900454 - Enable host-based disk encryption on Azure platform
1900819 - Scaled ingress replicas following sharded pattern don't balance evenly across multi-AZ
1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed
1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API
1901648 - "do you need to set up custom dns" tooltip inaccurate
1902003 - Jobs Completions column is not sorting when there are "0 of 1" and "1 of 1" in the list.
1902076 - image registry operator should monitor status of its routes
1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs
1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given
1903228 - Pod stuck in Terminating, runc init process frozen
1903383 - Latest RHCOS 47.83. builds failing to install: mount /root.squashfs failed
1903553 - systemd container renders node NotReady after deleting it
1903700 - metal3 Deployment doesn't have unique Pod selector
1904006 - The --dir option doest not work for command oc image extract
1904505 - Excessive Memory Use in Builds
1904507 - vsphere-problem-detector: implement missing metrics
1904558 - Random init-p error when trying to start pod
1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests
1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list
1905159 - Installation on previous unused dasd fails after formatting
1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory
1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails
1905577 - Control plane machines not adopted when provisioning network is disabled
1905627 - Warn users when using an unsupported browser such as IE
1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP
1905849 - Default volumesnapshotclass should be created when creating default storageclass
1906056 - Bundles skipped via the skips field cannot be pinned
1906102 - CBO produces standard metrics
1906147 - ironic-rhcos-downloader should not use --insecure
1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart
1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region
1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage
1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value
1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything
1907614 - Update kubernetes deps to 1.20
1908068 - Enable DownwardAPIHugePages feature gate
1908169 - The example of Import URL is "Fedora cloud image list" for all templates.
1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container
1908343 - Input labels in Manage columns modal should be clickable
1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures
1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule
1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes
1908765 - [SCALE] enable OVN lflow data path groups
1908774 - [SCALE] enable OVN DB memory trimming on compaction
1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it
1909091 - Pod/node/ip/template isn't showing when vm is running
1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing
1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade
1910067 - UPI: openstacksdk fails on "server group list"
1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing
1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status
1910378 - socket timeouts for webservice communication between pods
1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling
1910500 - Could not list CSI provisioner on web when create storage class on GCP platform
1911211 - Should show the cert-recovery-controller version correctly
1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames
1912571 - libvirt: Support setting dnsmasq options through the install config
1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1913112 - BMC details should be optional for unmanaged hosts
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913341 - GCP: strange cluster behavior in CI run
1913399 - switch to v1beta1 for the priority and fairness APIs
1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint
1913532 - After a 4.6 to 4.7 upgrade, a node went unready
1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory"
1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs
1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root
1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20
1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names
1915693 - Not able to install gpu-operator on cpumanager enabled node.
1915971 - Role and Role Binding breadcrumbs do not work as expected
1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page
1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall
1916392 - scrape priority and fairness endpoints for must-gather
1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form
1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready"
1916553 - Default template's description is empty on details tab
1916593 - Destroy cluster sometimes stuck in a loop
1916872 - need ability to reconcile exgw annotations on pod add
1916890 - [OCP 4.7] api or api-int not available during installation
1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs.
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - oc adm catalog mirror doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use oc new-app --name=testapp2 -i with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in “Insights” popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the resources section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn’t support “csi.storage.k8s.io/fsTyps” parameter
1932135 - When “iopsPerGB” parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When “iopsPerGB” parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can’t find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \"\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitarly
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard Default /Kubernetes / Compute Resources / Namespace (Workloads)
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can’t finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen isntt stop loading
1937005 - when kuryr quotas are unlimited, we should not sent alerts
1937018 - FilterToolbar
component does not handle 'null' value for 'rowFilters' prop 1937020 - Release new from image stream chooses incorrect ID based on status 1937077 - Blank White page on Topology 1937102 - Pod Containers Page Not Translated 1937122 - CAPBM changes to support flexible reboot modes 1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted 1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes 1937244 - [Local Storage] The model name of aws EBS doesn't be extracted well 1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes 1937452 - cluster-network-operator CI linting fails in master branch 1937459 - Wrong Subnet retrieved for Service without Selector 1937460 - [CI] Network quota pre-flight checks are failing the installation 1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster 1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation 1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint 1937535 - Not all image pulls within OpenShift builds retry 1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes 1937627 - Bump DEFAULT_DOC_URL for 4.8 1937628 - Bump upgrade channels for 4.8 1937658 - Description for storage class encryption during storagecluster creation needs to be updated 1937666 - Mouseover on headline 1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage 1937693 - ironic image "/" cluttered with files 1937694 - [oVirt] split ovirt providerIDReconciler logic into NodeController and ProviderIDController 1937717 - If browser default font size is 20, the layout of template screen breaks 1937722 - OCP 4.8 vuln due to BZ 1936445 1937929 - Operand page shows a 404:Not Found error for OpenShift 
GitOps Operator 1937941 - [RFE]fix wording for favorite templates 1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations 1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go 1938321 - Cannot view PackageManifest objects in YAML on 'Home > Search' page nor 'CatalogSource details > Operators tab' 1938465 - thanos-querier should set a CPU request on the thanos-query container 1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container 1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them 1938468 - kube-scheduler-operator has a container without a CPU request 1938492 - Marketplace extract container does not request CPU or memory 1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not 1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller 1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph 1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10% 1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances 1938949 - [VPA] Updater failed to trigger evictions due to "vpa-admission-controller" not found 1939054 - machine healthcheck kills aws spot instance before generated 1939060 - CNO: nodes and masters are upgrading simultaneously 1939069 - Add source to vm template silently failed when no storage class is defined in the cluster 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1939168 - Builds failing for OCP 3.11 since PR#25 was merged 1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz 1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez 1939232 - CI tests using openshift/hello-world broken by Ruby Version Update 1939270 - fix co upgradeableFalse 
status and reason 1939294 - OLM may not delete pods with grace period zero (force delete) 1939412 - missed labels for thanos-ruler pods 1939485 - CVE-2021-20291 containers/storage: DoS via malicious image 1939547 - Include container="POD" in resource queries 1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0 1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated 1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs 1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent 1939661 - support new AWS region ap-northeast-3 1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution 1939731 - Image registry operator reports unavailable during normal serial run 1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters 1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase 1939752 - ovnkube-master sbdb container does not set requests on cpu or memory 1939753 - Delete HCO is stucking if there is still VM in the cluster 1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page 1939853 - [DOC] Creating manifests API should not allow folder in the "file_name" 1939865 - GCP PD CSI driver does not have CSIDriver instance 1939869 - [e2e][automation] Add annotations to datavolume for HPP 1939873 - Unlimited number of characters accepted for base domain name 1939943 - cluster-kube-apiserver-operator check-endpoints observed a panic: runtime error: invalid memory address or nil pointer dereference 1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration 1940057 - Openshift builds should use a wach instead of polling when checking for pod status 1940142 - 4.6->4.7 updates stick on 
OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying 1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network 1940206 - Selector and VolumeTableRows not i18ned 1940207 - 4.7->4.6 rollbacks stuck on prometheusrules admission webhook "no route to host" 1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads) 1940318 - No data under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' 1940322 - Split of dashbard is wrong, many Network parts 1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn't have flavors needed for compute machines 1940361 - [e2e][automation] Fix vm action tests with storageclass HPP 1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters 1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys 1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages 1940499 - hybrid-overlay not logging properly before exiting due to an error 1940518 - Components in bare metal components lack resource requests 1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header 1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned 1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info 1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list 1940876 - Components in ovirt components lack resource requests 1940889 - Installation failures in OpenStack release jobs 1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io 1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP 1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster 1940950 - vsphere: client/bootstrap CSR 
double create 1940972 - vsphere: [4.6] CSR approval delayed for unknown reason 1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16. 1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy 1941342 - Add kata-osbuilder-generate.service as part of the default presets 1941456 - Multiple pods stuck in ContainerCreating status with the message "failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user" being seen in the journal log 1941526 - controller-manager-operator: Observed a panic: nil pointer dereference 1941592 - HAProxyDown not Firing 1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp 1941625 - Developer -> Topology - i18n misses 1941635 - Developer -> Monitoring - i18n misses 1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid 1941645 - Developer -> Builds - i18n misses 1941655 - Developer -> Pipelines - i18n misses 1941667 - Developer -> Project - i18n misses 1941669 - Developer -> ConfigMaps - i18n misses 1941759 - Errored pre-flight checks should not prevent install 1941798 - Some details pages don't have internationalized ResourceKind labels 1941801 - Many filter toolbar dropdowns haven't been internationalized 1941815 - From the web console the terminal can no longer connect after using leaving and returning to the terminal view 1941859 - [assisted operator] assisted pod deploy first time in error state 1941901 - Toleration merge logic does not account for multiple entries with the same key 1941915 - No validation against template name in boot source customization 1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description 1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8 1941990 - Pipeline metrics endpoint changed 
in osp-1.4 1941995 - fix backwards incompatible trigger api changes in osp1.4 1942086 - Administrator -> Home - i18n misses 1942117 - Administrator -> Workloads - i18n misses 1942125 - Administrator -> Serverless - i18n misses 1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup) 1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail 1942271 - Insights operator doesn't gather pod information from openshift-cluster-version 1942375 - CRI-O failing with error "reserving ctr name" 1942395 - The status is always "Updating" on dc detail page after deployment has failed. 1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied 1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate 1942536 - Corrupted image preventing containers from starting 1942548 - Administrator -> Networking - i18n misses 1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic 1942555 - Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork 1942557 - Query is reporting "no datapoint" when label cluster="" is set but work when the label is removed or when running directly in Prometheus 1942608 - crictl cannot list the images with an error: error locating item named "manifest" for image with ID 1942614 - Administrator -> Storage - i18n misses 1942641 - Administrator -> Builds - i18n misses 1942673 - Administrator -> Pipelines - i18n misses 1942694 - Resource names with a colon do not display property in the browser window title 1942715 - Administrator -> User Management - i18n misses 1942716 - Quay Container Security operator has Medium <-> Low colors reversed 1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8] 1942736 - 
Administrator -> Administration - i18n misses 1942749 - Install Operator form should use info icon for popovers 1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls 1942839 - Windows VMs fail to start on air-gapped environments 1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set 1942858 - [RFE]Confusing detach volume UX 1942883 - AWS EBS CSI driver does not support partitions 1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy 1942935 - must-gather improvements 1943145 - vsphere: client/bootstrap CSR double create 1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2) 1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() 1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked 1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest 1943238 - The conditions table does not occupy 100% of the width. 1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane 1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB. 
1943315 - avoid workload disruption for ICSP changes 1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes 1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest 1943356 - Dynamic plugins surfaced in the UI should be referred to as "Console plugins" 1943539 - crio-wipe is failing to start "Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container" 1943543 - DeploymentConfig Rollback doesn't reset params correctly 1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement 1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds 1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage 1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn 1943649 - don't use hello-openshift for network-check-target 1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress 1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions 1943804 - API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB 1943845 - Router pods should have startup probes configured 1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors 1944160 - CNO: nbctl daemon should log reconnection info 1944180 - OVN-Kube Master does not release election lock on shutdown 1944246 - Ironic fails to inspect and move node to "manageable' but get bmh remains in "inspecting" 1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region 1944509 - Translatable texts without context in ssh expose component 1944581 - oc project not works with cluster proxy 1944587 - VPA could not take actions based on the recommendation when min-replicas=1 1944590 - The field name "VolumeSnapshotContent" is wrong on 
VolumeSnapshotContent detail page 1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI 1944631 - openshif authenticator should not accept non-hashed tokens 1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with ".. still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock" 1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures 1944674 - Project field become to "All projects" and disabled in "Review and create virtual machine" step in devconsole 1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods 1944761 - field level help instances do not use common util component 1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present 1944763 - field level help instances do not use common util component 1944853 - Update to nodejs >=14.15.4 for ARM 1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts 1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation 1945027 - Button 'Copy SSH Command' does not work 1945085 - Bring back API data in etcd test 1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled 1945103 - 'User credentials' shows even the VM is not running 1945104 - In k8s 1.21 bump '[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume' tests are disabled 1945146 - Remove pipeline Tech preview badge for pipelines GA operator 1945236 - Bootstrap ignition shim doesn't follow proxy settings 1945261 - Operator dependency not consistently chosen from default channel 1945312 - project deletion does not reset UI project context 1945326 - console-operator: does not check route health periodically 1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules 1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP 
[Suite:openshift/conformance/serial] 1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly 1945443 - operator-lifecycle-manager-packageserver flaps Available=False with no reason or message 1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service 1945548 - catalog resource update failed if spec.secrets set to "" 1945584 - Elasticsearch operator fails to install on 4.8 cluster on ppc64le/s390x 1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION 1945630 - Pod log filename no longer in -.log format 1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin 1945646 - gcp-routes.sh running as initrc_t unnecessarily 1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret 1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry 1945687 - Dockerfile needs updating to new container CI registry 1945700 - Syncing boot mode after changing device should be restricted to Supermicro 1945816 - " Ingresses " should be kept in English for Chinese 1945818 - Chinese translation issues: Operator should be the same with English Operators 1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out 1945910 - [aws] support byo iam roles for instances 1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6. 
1946079 - Virtual master is not getting an IP address 1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile" 1946119 - panic parsing install-config 1946243 - No relevant error when pg limit is reached in block pools page 1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image 1946320 - Incorrect error message in Deployment Attach Storage Page 1946449 - [e2e][automation] Fix cloud-init tests as UI changed 1946458 - Edit Application action overwrites Deployment envFrom values on save 1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI. 1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default 1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: " 1946506 - [on-prem] mDNS plugin no longer needed 1946513 - honor use specified system reserved with auto node sizing 1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready 1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster 1946607 - etcd readinessProbe is not reflective of actual readiness 1946705 - Fix issues with "search" capability in the Topology Quick Add component 1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation 1946788 - Serial tests are broken because of router 1946790 - Marketplace operator flakes Available=False OperatorStarting during updates 1946838 - Copied CSVs show up as adopted components 1946839 - [Azure] While mirroring images to private registry throwing error: invalid character '<' looking for beginning of value 1946865 - no "namespace:kube_pod_container_resource_requests_cpu_cores:sum" and "namespace:kube_pod_container_resource_requests_memory_bytes:sum" metrics 1946893 - the error messages are inconsistent in DNS status conditions if the 
default service IP is taken 1946922 - Ingress details page doesn't show referenced secret name and link 1946929 - the default dns operator's Progressing status is always True and cluster operator dns Progressing status is False 1947036 - "failed to create Matchbox client or connect" on e2e-metal jobs or metal clusters via cluster-bot 1947066 - machine-config-operator pod crashes when noProxy is * 1947067 - [Installer] Pick up upstream fix for installer console output 1947078 - Incorrect skipped status for conditional tasks in the pipeline run 1947080 - SNO IPv6 with 'temporary 60-day domain' option fails with IPv4 exception 1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install 1947164 - Print "Successfully pushed" even if the build push fails. 1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed. 1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48) 1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name's 1947360 - [vSphere csi driver operator] operator pod runs as “BestEffort” qosClass 1947371 - [vSphere csi driver operator] operator doesn't create “csidriver” instance 1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout 1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8) 1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot 1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8) 1947663 - disk details are not synced in web-console 1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin 1947684 - MCO on SNO sometimes has rendered configs and sometimes does not 1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals. 
1947719 - 8 APIRemovedInNextReleaseInUse info alerts display 1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods 1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade 1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc? 1947771 - [kube-descheduler]descheduler operator pod should not run as “BestEffort” qosClass 1947774 - CSI driver operators use "Always" imagePullPolicy in some containers 1947775 - [vSphere csi driver operator] doesn’t use the downstream images from payload. 1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade 1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade 1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display 1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert 1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display 1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs 
that trigger APIRemovedInNextReleaseInUse alert 1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display 1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display 1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin 1947828 - download it link should save pod log in -.log format 1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel is changed 1947917 - Egress Firewall does not reliably apply firewall rules 1947946 - Operator upgrades can delete existing CSV before completion 1948011 - openshift-controller-manager constantly reporting type "Upgradeable" status Unknown 1948012 - service-ca constantly reporting type "Upgradeable" status Unknown 1948019 - [4.8] Large number of requests to the infrastructure cinder volume service 1948022 - Some on-prem namespaces missing from must-gather 1948040 - cluster-etcd-operator: etcd is using deprecated logger 1948082 - Monitoring should not set Available=False with no reason on updates 1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O. 
1948232 - DNS operator performs spurious updates in response to API's defaulting of daemonset's maxSurge and service's ipFamilies and ipFamilyPolicy fields 1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later 1948359 - [aws] shared tag was not removed from user provided IAM role 1948410 - [LSO] Local Storage Operator uses imagePullPolicy as "Always" 1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn't take effective after changing 1948427 - No action is triggered after click 'Continue' button on 'Show community Operator' windows 1948431 - TechPreviewNoUpgrade does not enable CSI migration 1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node 1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge 1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial] 1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes 1948513 - get-resources.sh doesn't honor the no_proxy settings 1948524 - 'DeploymentUpdated' Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute 1948546 - VM of worker is in error state when a network has port_security_enabled=False 1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand 1948555 - A lot of events "rpc error: code = DeadlineExceeded desc = context deadline exceeded" were seen in azure disk csi driver verification test 1948563 - End-to-End Secure boot deployment fails "Invalid value for input variable" 1948582 - Need ability to specify local gateway mode in CNO config 1948585 - Need a CI jobs to test local gateway mode with bare metal 1948592 - [Cluster Network Operator] Missing Egress Router Controller 1948606 - DNS e2e test fails "[sig-arch] Only known images used by 
tests" because it does not use a known image 1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly] 1948626 - TestRouteAdmissionPolicy e2e test is failing often 1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI 1948634 - upgrades: allow upgrades without version change 1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found 1948701 - unneeded CCO alert already covered by CVO 1948703 - p&f: probes should not get 429s 1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows bootstrap.ign was not found 1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile 1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile 1948711 - thanos querier and prometheus-adapter should have 2 replicas 1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile 1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile 1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector 1948719 - Machine API components should use 1.21 dependencies 1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile 1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed 1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing 1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com 1948782 - Stale references to the single-node-production-edge cluster profile 1948787 - secret.StringData shouldn't be used for reads 1948788 - 
Clicking an empty metrics graph (when there is no data) should still open metrics viewer 1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page 1948919 - Need minor update in message on channel modal 1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region 1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query 1948936 - [e2e][automation][prow] Prow script point to deleted resource 1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer 1948953 - Uninitialized cloud provider error when provisioning a cinder volume 1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages 1948966 - Add the ability to run a gather done by IO via a Kubernetes Job 1948981 - Align dependencies and libraries with latest ironic code 1948998 - style fixes by GoLand and golangci-lint 1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way. 1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV 1949022 - Openshift 4 has a zombie problem 1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil 1949041 - vsphere: wrong image names in bundle 1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack) 1949050 - Bump k8s to latest 1.21 1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig 1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service 1949075 - Extend openshift/api for Add card customization 1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues 1949096 - Restore private git clone tests 1949099 - network-check-target code cleanup 1949105 - NetworkPolicy ... 
should enforce ingress policy allowing any port traffic to a server on a specific protocol 1949145 - Move openshift-user-critical priority class to CCO 1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used 1949180 - Pipelines plugin model kinds aren't picked up by parser 1949202 - sriov-network-operator not available from operatorhub on ppc64le 1949218 - ccoctl not included in container image 1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs 1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors 1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate 1949306 - need a way to see top API accessors 1949313 - Rename vmware-vsphere- images to vsphere- images before 4.8 ships 1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring 1949347 - apiserver-watcher support for dual-stack 1949357 - manila-csi-controller pod not running due to secret lack(in another ns) 1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16" 1949364 - Mention scheduling profiles in scheduler operator repository 1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error 1949384 - Edit Default Pull Secret modal - i18n misses 1949387 - Fix the typo in auto node sizing script 1949404 - label selector on pvc creation page - i18n misses 1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page 1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses 1949413 - Automatic boot order setting is done incorrectly when using by-path style device names 1949418 - Controller factory workers should always restart on panic() 1949419 - 
oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)" 1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin 1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it 1949480 - Listeners timeout are constantly being updated 1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages 1949509 - Kuryr should manage API LB instead of CNO 1949514 - URL is not visible for routes at narrow screen widths 1949554 - Metrics of vSphere CSI driver sidecars are not collected 1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals" 1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing 1949591 - Alert does not catch removed api usage during end-to-end tests. 
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse 1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du' 1949626 - machine-api fails to create AWS client in new regions 1949661 - Kubelet Workloads Management changes for OCPNODE-529 1949664 - Spurious keepalived liveness probe failures 1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot 1949677 - multus is the first pod on a new node and the last to go ready 1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace 1949721 - Pick 99237: Use the audit ID of a request for better correlation 1949741 - Bump golang version of cluster-machine-approver 1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64 1949810 - OKD 4.7 unable to access Project Topology View 1949818 - Add e2e test to perform MCO operation Single Node OpenShift 1949820 - Unable to use oc adm top is shortcut when asking for imagestreams 1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand 1949866 - The ccoctl fails to create authentication file when running the command ccoctl aws create-identity-provider with --output-dir parameter 1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work 1949882 - service-idler build error 1949898 - Backport RP#848 to OCP 4.8 1949907 - Gather summary of PodNetworkConnectivityChecks 1949923 - some defined rootVolumes zones not used on installation 1949928 - Samples Operator updates break CI tests 1949935 - Fix incorrect access review check on start pipeline kebab action 1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas 1949967 - Update Kube dependencies in MCO to 1.21 1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable 1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] 
The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal] 1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name 1949991 - openshift-marketplace pods are crashlooping 1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image 1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy 1950047 - CSV deployment template custom annotations are not propagated to deployments 1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791 1950113 - in-cluster operators need an API for additional AWS tags 1950133 - MCO creates empty conditions on the kubeletconfig object 1950159 - Downstream ovn-kubernetes repo should have no linter errors 1950175 - Update Jenkins and agent base image to Go 1.16 1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked 1950210 - VPA CRDs use deprecated API version 1950219 - KnativeServing is not shown in list on global config page 1950232 - [Descheduler] - The minKubeVersion should be 1.21 1950236 - Update OKD imagestreams to prefer centos7 images 1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command 1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers 1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network 1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs 1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing 1950409 - Descheduler operator code and docs still reference v1beta1 1950417 - The Marketplace Operator is building with EOL k8s versions 1950430 - CVO serves metrics over HTTP, despite a lack of consumers 
1950460 - RFE: Change Request Size Input to Number Spinner Input 1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap 1950532 - Include "update" when referring to operator approval and channel 1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift) 1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff 1950653 - BuildConfig ignores Args 1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node 1950908 - kube_pod_labels metric does not contain k8s labels 1950912 - [e2e][automation] add devconsole tests 1950916 - [RFE]console page show error when vm is poused 1950934 - Unnecessary rollouts can happen due to unsorted endpoints 1950935 - Updating cluster-network-operator builder & base images to be consistent with ART 1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller 1951007 - ovn master pod crashed 1951029 - Drainer panics on missing context for node patch 1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts 1951042 - Panics every few minutes in kubelet logs post-rebase 1951043 - Start Pipeline Modal Parameters should accept empty string defaults 1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests 1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud 1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages 1951158 - Egress Router CRD missing Addresses entry 1951169 - Improve API Explorer discoverability from the Console 1951174 - re-pin libvirt to 6.0.0 1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit 1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI 1951212 - User/Group details shows unrelated subjects in role bindings tab 1951214 - VM list page crashes when the volume 
type is sysprep 1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions 1951387 - opm index add doesn't respect deprecated bundles 1951412 - Configmap gatherer can fail incorrectly 1951456 - Docs and linting fixes 1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names 1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap 1951558 - Backport Upstream 101093 for Startup Probe Fix 1951585 - enterprise-pod fails to build 1951636 - assisted service operator use default serviceaccount in operator bundle 1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes 1951639 - Bootstrap API server unclean shutdown causes reconcile delay 1951646 - Unexpected memory climb while container not in use 1951652 - Add retries to opm index add 1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit 1951671 - Excessive writes to ironic Nodes 1951705 - kube-apiserver needs alerts on CPU utlization 1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance 1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior 1951858 - unexpected text '0' on filter toolbar on RoleBinding tab 1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator 1951870 - sriov network resources injector: user defined injection removed existing pod annotations 1951891 - [migration] cannot change ClusterNetwork CIDR during migration 1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost 1952001 - Delegated authentication: reduce the number of watch requests 1952032 - malformatted assets in CMO 1952045 - Mirror nfs-server image used in jenkins-e2e 1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access 
related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1952079 - rebase openshift/sdn to kube 1.21 1952111 - Optimize importing from @patternfly/react-tokens 1952174 - DNS operator claims to be done upgrading before it even starts 1952179 - OpenStack Provider Ports UI Underscore Variables 1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID 1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods 1952214 - Console Devfile Import Dev Preview broken 1952238 - Catalog pods don't report termination logs to catalog-operator 1952262 - Need support external gateway via hybrid overlay 1952266 - etcd operator bumps status.version[name=operator] before operands update 1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots 1952282 - CSR approver races with nodelink controller and does not requeue 1952310 - VM cannot start up if the ssh key is added by another template 1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport 1952333 - openshift/kubernetes vulnerable to CVE-2021-3121 1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations 1952367 - No VM status on overview page when VM is pending 1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc 1952372 - VM stop action should not be there if the VM is not running 1952405 - console-operator is not reporting correct Available status 1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped 1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled 1952473 - Monitor pod 
placement during upgrades 1952487 - Template filter does not work properly 1952495 - “Create” button on the Templates page is confuse 1952527 - [Multus] multi-networkpolicy does wrong filtering 1952545 - Selection issue when inserting YAML snippets 1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub 1952604 - Incorrect port in external loadbalancer config 1952610 - [aws] image-registry panics when the cluster is installed in a new region 1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances 1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage 1952625 - Fix translator-reported text issues 1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8 1952635 - Web console displays a blank page- white space instead of cluster information 1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory) 1952666 - Implement Enhancement 741 for Kubelet 1952667 - Update Readme for cluster-baremetal-operator with details about the operator 1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client 1952728 - It was not clear for users why Snapshot feature was not available 1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard 1952732 - Users did not understand the boot source labels 1952741 - Monitoring DB: after set Time Range as Custom time range, no data display 1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled 1952759 - [RFE]It was not immediately clear what the Star icon meant 1952795 - cloud-network-config-controller CRD does not specify correct plural name 1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows 1952820 - [LSO] Delete localvolume pv is failed 1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud 1952891 - Upgrade failed due to cinder 
csi driver not deployed 1952904 - Linting issues in gather/clusterconfig package 1952906 - Unit tests for configobserver.go 1952931 - CI does not check leftover PVs 1952958 - Runtime error loading console in Safari 13 1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool 1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform 1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU 1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource 1953102 - kubelet CPU use during an e2e run increased 25% after rebase 1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9 1953169 - endpoint slice controller doesn't handle services target port correctly 1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet" 1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it 1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly 1953418 - [e2e][automation] Fix vm wizard validate tests 1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message" 1953530 - Fix openshift/sdn unit test flake 1953539 - kube-storage-version-migrator: priorityClassName not set 1953543 - (release-4.8) Add missing sample archive data 1953551 - build failure: unexpected trampoline for shared or dynamic linking 1953555 - GlusterFS tests fail on ipv6 clusters 1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology 1953670 - ironic container image build failing because esp partition size is too small 1953680 - ipBlock ignoring all other cidr's apart from the last one specified 1953691 - Remove unused mock 1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console 1953726 - Fix issues related to 
loading dynamic plugins 1953729 - e2e unidling test is flaking heavily on SNO jobs 1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes 1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS 1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster 1953810 - Allow use of storage policy in VMC environments 1953830 - The oc-compliance build does not available for OCP4.8 1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation 1953977 - [4.8] packageserver pods restart many times on the SNO cluster 1953979 - Ironic caching virtualmedia images results in disk space limitations 1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown 1954025 - Disk errors while scaling up a node with multipathing enabled 1954087 - Unit tests for kube-scheduler-operator 1954095 - Apply user defined tags in AWS Internal Registry 1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns 1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots 1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js 1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22 1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22 1954248 - Disable Alertmanager Protractor e2e tests 1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container 1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster 1954421 - Get 'Application is not available' when access Prometheus UI 1954459 - Error: Gateway Time-out display on Alerting console 1954460 - UI, The status 
of "Used Capacity Breakdown [Pods]" is "Not available" 1954509 - FC volume is marked as unmounted after failed reconstruction 1954540 - Lack translation for local language on pages under storage menu 1954544 - authn operator: endpoints controller should use the context it creates 1954554 - Add e2e tests for auto node sizing 1954566 - Cannot update a component (UtilizationCard) error when switching perspectives manually 1954597 - Default image for GCP does not support ignition V3 1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator 1954634 - apirequestcounts does not honor max users 1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0 1954640 - Support of gatherers with different periods 1954671 - disable volume expansion support in vsphere csi driver storage class 1954687 - localvolumediscovery and localvolumset e2es are disabled 1954688 - LSO has missing examples for localvolumesets 1954696 - [API-1009] apirequestcounts should indicate useragent 1954715 - Imagestream imports become very slow when doing many in parallel 1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace 1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure 1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert 1954783 - [aws] support byo private hosted zone 1954790 - KCM Alert 
PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage 1954830 - verify-client-go job is failing for release-4.7 branch 1954865 - Add necessary priority class to pod-identity-webhook deployment 1954866 - Add necessary priority class to downloads 1954870 - Add necessary priority class to network components 1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack. 1954891 - Add necessary priority class to pruner 1954892 - Add necessary priority class to ingress-canary 1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources 1954937 - [API-1009] oc get apirequestcount shows blank for column REQUESTSINCURRENTHOUR 1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services 1954972 - TechPreviewNoUpgrade featureset can be undone 1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs 1954994 - should update to 2.26.0 for prometheus resources label 1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist 1955089 - Support [sig-cli] oc observe works as expected test for IPv6 1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display 1955102 - Add vsphere_node_hw_version_total metric to the collected metrics 1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1955196 - linuxptp-daemon crash on 4.8 1955226 - operator updates apirequestcount CRD over and over 1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing 1955256 - stop collecting API that no longer exists 1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts 1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google" 1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator 1955445 - Drop crio image metrics with high 
cardinality 1955457 - Drop container_memory_failures_total metric because of high cardinality 1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter 1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0 1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used 1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation 1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range 1955554 - MAO does not react to events triggered from Validating Webhook Configurations 1955589 - thanos-querier should have a PodDisruptionBudget in HA topology 1955595 - Add DevPreviewLongLifecycle Descheduler profile 1955596 - Pods stuck in creation phase on realtime kernel SNO 1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing 1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error'] 1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta 1955749 - OCP branded templates need to be translated 1955761 - packageserver clusteroperator does not set reason or message for Available condition 1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces 1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation 1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables 1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable 1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated 1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct 1955879 - Customer tags cannot be seen in S3 level 
when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio 1955969 - Workers cannot be deployed attached to multiple networks. 1956079 - Installer gather doesn't collect any networking information 1956208 - Installer should validate root volume type 1956220 - Set htt proxy system properties as expected by kubernetes-client 1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet 1956334 - Event Listener Details page does not show Triggers section 1956353 - test: analyze job consistently fails 1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate 1956405 - Bump k8s dependencies in cluster resource override admission operator 1956411 - Apply custom tags to AWS EBS volumes 1956480 - [4.8] Bootimage bump tracker 1956606 - probes FlowSchema manifest not included in any cluster profile 1956607 - Multiple manifests lack cluster profile annotations 1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup 1956610 - manage-helm-repos manifest lacks cluster profile annotations 1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string 1956650 - The container disk URL is empty for Windows guest tools 1956768 - aws-ebs-csi-driver-controller-metrics TargetDown 1956826 - buildArgs does not work when the value is taken from a secret 1956895 - Fix chatty kubelet log message 1956898 - fix log files being overwritten on container state loss 1956920 - can't open terminal for pods that have more than one container running 1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false 1956978 - Installer gather doesn't include pod names in filename 1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW 1957041 - Update CI e2echart with more node info 
1957127 - Delegated authentication: reduce the number of watch requests 1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image 1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes 1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient 1957179 - Incorrect VERSION in node_exporter 1957190 - CI jobs failing due too many watch requests (prometheus-operator) 1957198 - Misspelled console-operator condition 1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap 1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2 1957261 - update godoc for new build status image change trigger fields 1957295 - Apply priority classes conventions as test to openshift/origin repo 1957315 - kuryr-controller doesn't indicate being out of quota 1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly 1957374 - mcddrainerr doesn't list specific pod 1957386 - Config serve and validate command should be under alpha 1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions 1957502 - Infrequent panic in kube-apiserver in aws-serial job 1957561 - lack of pseudolocalization for some text on Cluster Setting page 1957584 - Routes are not getting created when using hostname without FQDN standard 1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone 1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes 1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's 1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out 1957748 - Ptp operator pod should have CPU and memory requests set but 
not limits 1957756 - Device Replacement UI, The status of the disk is "replacement ready" before I clicked on "start replacement" 1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1957775 - CVO creating cloud-controller-manager too early causing upgrade failures 1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error 1957822 - Update apiserver tlsSecurityProfile description to include Custom profile 1957832 - CMO end-to-end tests work only on AWS 1957856 - 'resource name may not be empty' is shown in CI testing 1957869 - baremetal IPI power_interface for irmc is inconsistent 1957879 - cloud-controller-manager ClusterOperator manifest does not declare relatedObjects 1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer 1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install 1957895 - Cypress helper projectDropdown.shouldContain is not an assertion 1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads 1957926 - "Add Capacity" should allow to add n3 (or n4) local devices at once 1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state 1957967 - Possible test flake in listPage Cypress view 1957972 - Leftover templates from mdns 1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7 1957982 - Deployment Actions clickable for view-only projects 1957991 - ClusterOperatorDegraded can fire during installation 1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator 1958080 - Missing i18n for login, error and selectprovider pages 1958094 - Audit log files are corrupted sometimes 1958097 - don't show "old, insecure token format" if the token does not actually exist 1958114 - Ignore staged vendor
files in pre-commit script 1958126 - [OVN]Egressip doesn't take effect 1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs 1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names 1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs 1958285 - Deployment considered unhealthy despite being available and at latest generation 1958296 - OLM must explicitly alert on deprecated APIs in use 1958329 - pick 97428: add more context to log after a request times out 1958367 - Build metrics do not aggregate totals by build strategy 1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton 1958405 - etcd: current health checks and reporting are not adequate to ensure availability 1958406 - Twistlock flags mode of /var/run/crio/crio.sock 1958420 - openshift-install 4.7.10 fails with segmentation error 1958424 - aws: support more auth options in manual mode 1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View 1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse 1958643 - All pods creation stuck due to SR-IOV webhook timeout 1958679 - Compression on pool can't be disabled via UI 1958753 - VMI nic tab is not loadable 1958759 - Pulling Insights report is missing retry logic 1958811 - VM creation fails on API version mismatch 1958812 - Cluster upgrade halts as machine-config-daemon fails to parse rpm-ostree status during cluster upgrades 1958861 - [CCO] pod-identity-webhook certificate request failed 1958868 - ssh copy is missing when vm is running 1958884 - Confusing error message when volume AZ not found 1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff 1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs 1958958 - [SCALE] segfault with ovnkube adding to 
address set 1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes 1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod 1959058 - ovn-kubernetes has lock contention on the LSP cache 1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change 1959177 - Descheduler dev manifests are missing permissions 1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload 1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates 1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring 1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check 1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system 1959406 - Difficult to debug performance on ovn-k without pprof enabled 1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results 1959479 - machines doesn't support dual-stack loadbalancers on Azure 1959513 - Cluster-kube-apiserver does not use library-go for audit pkg 1959519 - Operand details page only renders one status donut no matter how many 'podStatuses' descriptors are used 1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console 1959564 - Test verify /run filesystem contents failing 1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot 1959650 - Gather SDI-related MachineConfigs 1959658 - showing a lot "constructing many client instances from the same exec auth config" 1959696 - Deprecate 'ConsoleConfigRoute' struct in console-operator config 1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO 1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode 1959711 - Egressnetworkpolicy doesn't work when configure 
the EgressIP 1959786 - [dualstack]EgressIP doesn't work on dualstack cluster for IPv6 1959916 - Console does not work well against a proxy in front of openshift clusters 1959920 - UEFISecureBoot set not on the right master node 1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: [] 1960035 - iptables is missing from ose-keepalived-ipfailover image 1960059 - Remove "Grafana UI" link from Console Monitoring > Dashboards page 1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions 1960129 - [e2e][automation] add smoke tests about VM pages and actions 1960134 - some origin images are not public 1960171 - Enable SNO checks for image-registry 1960176 - CCO should recreate a user for the component when it was removed from the cloud providers 1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled 1960255 - fixed obfuscation permissions 1960257 - breaking changes in pr template 1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on shutdown, policy Cluster has significant performance cost 1960323 - Address issues raised by coverity security scan 1960324 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop 1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960339 - manifests: unset "preemptionPolicy" makes CVO hotloop 1960531 - Items under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' keep added for every access 1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to understand compared with grafana 1960546 - Add virt_platform metric to the collected metrics 1960554 - Remove rbacv1beta1 handling code 1960612 - Node disk info in overview/details does not account for second drive where
/var is located 1960619 - Image registry integration tests use old-style OAuth tokens 1960683 - GlobalConfigPage is constantly requesting resources 1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces 1960716 - Missing details for debugging 1960732 - Outdated manifests directory in CSI driver operator repositories 1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master 1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be "the newest" 1960767 - /metrics endpoint of the Grafana UI is accessible without authentication 1960780 - CI: failed to create PDB "service-test" the server could not find the requested resource 1961064 - Documentation link to network policies is outdated 1961067 - Improve log gathering logic 1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs 1961091 - Gather MachineHealthCheck definitions 1961120 - CSI driver operators fail when upgrading a cluster 1961173 - recreate existing static pod manifests instead of updating 1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing 1961314 - Race condition in operator-registry pull retry unit tests 1961320 - CatalogSource does not emit any metrics to indicate if it's ready or not 1961336 - Devfile sample for BuildConfig is not defined 1961356 - Update single quotes to double quotes in string 1961363 - Minor string update for " No Storage classes found in cluster, adding source is disabled." 
1961393 - DetailsPage does not work with group~version~kind 1961452 - Remove "Alertmanager UI" link from Console Monitoring > Alerting page 1961466 - Some dropdown placeholder text on route creation page is not translated 1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true 1961506 - NodePorts do not work on RHEL 7.9 workers (was "4.7 -> 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers") 1961536 - clusterdeployment without pull secret is crashing assisted service pod 1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop 1961545 - Fixing Documentation Generation 1961550 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names" 1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig 1961561 - The encryption controllers send lots of request to an API server 1961582 - Build failure on s390x 1961644 - NodeAuthenticator tests are failing in IPv6 1961656 - driver-toolkit missing some release metadata 1961675 - Kebab menu of taskrun contains Edit options which should not be present 1961701 - Enhance gathering of events 1961717 - Update runtime dependencies to Wallaby builds for bugfixes 1961829 - Quick starts prereqs not shown when description is long 1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy 1961878 - Add Sprint 199 translations 1961897 - Remove history listener before console UI is unmounted 1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes 1962062 - Monitoring dashboards should support default values of "All" 1962074 - SNO:the pod get stuck in CreateContainerError and prompt "failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable" after adding a 
performanceprofile 1962095 - Replace gather-job image without FQDN 1962153 - VolumeSnapshot routes are ambiguous, too generic 1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime 1962219 - NTO relies on unreliable leader-for-life implementation. 1962256 - use RHEL8 as the vm-example 1962261 - Monitoring components requesting more memory than they use 1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster 1962347 - Cluster does not exist logs after successful installation 1962392 - After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits 1962415 - duplicate zone information for in-tree PV after enabling migration 1962429 - Cannot create windows vm because kubemacpool.io denied the request 1962525 - [Migration] SDN migration stuck on MCO on RHV cluster 1962569 - NetworkPolicy details page should also show Egress rules 1962592 - Worker nodes restarting during OS installation 1962602 - Cloud credential operator scrolls info "unable to provide upcoming..." on unsupported platform 1962630 - NTO: Ship the current upstream TuneD 1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root 1962698 - Console-operator can not create resource console-public configmap in the openshift-config-managed namespace 1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint 1962740 - Add documentation to Egress Router 1962850 - [4.8] Bootimage bump tracker 1962882 - Version pod does not set priorityClassName 1962905 - Ramdisk ISO source defaulting to "http" breaks deployment on a good amount of BMCs 1963068 - ironic container should not specify the entrypoint 1963079 - KCM/KS: ability to enforce localhost communication with the API server. 
1963154 - Current BMAC reconcile flow skips Ironic's deprovision step 1963159 - Add Sprint 200 translations 1963204 - Update to 8.4 IPA images 1963205 - Installer is using old redirector 1963208 - Translation typos/inconsistencies for Sprint 200 files 1963209 - Some strings in public.json have errors 1963211 - Fix grammar issue in kubevirt-plugin.json string 1963213 - Memsource download script running into API error 1963219 - ImageStreamTags not internationalized 1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment 1963267 - Warning: Invalid DOM property classname. Did you mean className? console warnings in volumes table 1963502 - create template from is not descriptive 1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too 1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault 1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies 1963871 - cluster-etcd-operator:[build] upgrade to go 1.16 1963896 - The VM disks table does not show easy links to PVCs 1963912 - "[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}" failures on vsphere 1963932 - Installation failures in bootstrap in OpenStack release jobs 1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail 1964059 - rebase openshift/sdn to kube 1.21.1 1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration 1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to "Unknown provider baremetal" 1964243 - The oc compliance fetch-raw doesn’t work for disconnected cluster 1964270 - Failed to install 'cluster-kube-descheduler-operator' with error: "clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\": must be no more than 63 characters" 1964319 - Network policy 
"deny all" interpreted as "allow all" in description page 1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured 1964472 - Make project and namespace requirements more visible rather than giving me an error after submission 1964486 - Bulk adding of CIDR IPs to whitelist is not working 1964492 - Pick 102171: Implement support for watch initialization in P&F 1964625 - NETID duplicate check is only required in NetworkPolicy Mode 1964748 - Sync upstream 1.7.2 downstream 1964756 - PVC status is always in 'Bound' status when it is actually cloning 1964847 - Sanity check test suite missing from the repo 1964888 - openshift-apiserver imagestreamimports depend on >34s timeout support, WAS: transport: loopyWriter.run returning. connection error: desc = "transport is closing" 1964936 - error log for "oc adm catalog mirror" is not correct 1964979 - Add mapping from ACI to infraenv to handle creation order issues 1964997 - Helm Library charts are showing and can be installed from Catalog 1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots 1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation 1965283 - 4.7->4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData: 1965330 - oc image extract fails due to security capabilities on files 1965334 - opm index add fails during image extraction 1965367 - Typo in etcd-metric-serving-ca resource name 1965370 - "Route" is not translated in Korean or Chinese 1965391 - When storage class is already present wizard does not jump to "Storage and nodes" 1965422 - runc is missing Provides oci-runtime in rpm spec 1965522 - [v2v] Multiple typos on VM Import screen 1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists 1965909 - Replace "Enable Taint Nodes" by "Mark nodes as dedicated" 1965921 - [oVirt] High performance VMs
shouldn't be created with Existing policy 1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request 1966077 - hidden descriptor is visible in the Operator instance details page 1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11 1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality 1966138 - (release-4.8) Update K8s & OpenShift API versions 1966156 - Issue with Internal Registry CA on the service pod 1966174 - No storage class is installed, OCS and CNV installations fail 1966268 - Workaround for Network Manager not supporting nmconnections priority 1966401 - Revamp Ceph Table in Install Wizard flow 1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert 1966416 - (release-4.8) Do not exceed the data size limit 1966459 - 'policy/v1beta1 PodDisruptionBudget' and 'batch/v1beta1 CronJob' appear in image-registry-operator log 1966487 - IP address in Pods list table are showing node IP other than pod IP 1966520 - Add button from ocs add capacity should not be enabled if there are no PV's 1966523 - (release-4.8) Gather MachineAutoScaler definitions 1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed 1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug 1966602 - don't require manually setting IPv6DualStack feature gate in 4.8 1966620 - The bundle.Dockerfile in the repo is obsolete 1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install 1966654 - Alertmanager PDB is not created, but Prometheus UWM is 1966672 - Add Sprint 201 translations 1966675 - Admin console string updates 1966677 - Change comma to semicolon 1966683 - Translation bugs from Sprint 201 files 1966684 - Verify "Creating snapshot for claim <1>{pvcName}</1>" displays correctly 1966697 - Garbage collector logs every interval - move to debug
level 1966717 - include full timestamps in the logs 1966759 - Enable downstream plugin for Operator SDK 1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version 1966813 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff 1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1 1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube" 1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings "ipv6.dhcp-duid=ll" missing from dual stack install 1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image 1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored 1967197 - 404 errors loading some i18n namespaces 1967207 - Getting started card: console customization resources link shows other resources 1967208 - Getting started card should use semver library for parsing the version instead of string manipulation 1967234 - Console is continuously polling for ConsoleLink acm-link 1967275 - Awkward wrapping in getting started dashboard card 1967276 - Help menu tooltip overlays dropdown 1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check 1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit 1967423 - [master] clusterDeployments controller should take 1m to requeue when failing with AddOpenshiftVersion 1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests 1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small 1967578 - [4.8.0] clusterDeployments controller should take 1m to requeue when failing with AddOpenshiftVersion 1967591 - The ManagementCPUsOverride admission plugin should
not mutate containers with the limit 1967595 - Fixes the remaining lint issues 1967614 - prometheus-k8s pods can't be scheduled due to volume node affinity conflict 1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn't work if ovirt-config.yaml doesn't exist and user should fill the FQDN URL 1967625 - Add OpenShift Dockerfile for cloud-provider-aws 1967631 - [4.8.0] Cluster install failed due to timeout while "Waiting for control plane" 1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube" 1967639 - Console whitescreens if user preferences fail to load 1967662 - machine-api-operator should not use deprecated "platform" field in infrastructures.config.openshift.io 1967667 - Add Sprint 202 Round 1 translations 1967713 - Insights widget shows invalid link to the OCM 1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming 1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than "NoExecute" 1967803 - should update to 7.5.5 for grafana resources version label 1967832 - Add more tests for periodic.go 1967833 - Add tasks pool to tasks_processing 1967842 - Production logs are spammed on "OCS requirements validation status Insufficient hosts to deploy OCS. 
A minimum of 3 hosts is required to deploy OCS" 1967843 - Fix null reference to messagesToSearch in gather_logs.go 1967902 - [4.8.0] Assisted installer chrony manifests missing index numbering 1967933 - Network-Tools debug scripts not working as expected 1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: "mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied" 1968019 - drain timeout and pool degrading period is too short 1968067 - [master] Agent validation not including reason for being insufficient 1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed 1968175 - [4.8.0] Agent validation not including reason for being insufficient 1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration 1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn't be required 1968435 - [4.8.0] Unclear message in case of missing clusterImageSet 1968436 - Listeners timeout updated to remain using default value 1968449 - [4.8.0] Wrong Install-config override documentation 1968451 - [4.8.0] Garbage collector not cleaning up directories of removed clusters 1968452 - [4.8.0] [doc] "Mirror Registry Configuration" doc section needs clarification of functionality and limitations 1968454 - [4.8.0] backend events generated with wrong namespace for agent 1968455 - [4.8.0] Assisted Service operator's controllers are starting before the base service is ready 1968515 - oc should set user-agent when talking with registry 1968531 - Sync upstream 1.8.0 downstream 1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn't clean up properly 1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted 1968625 - Pods using sr-iov interfaces failing to start for Failed to create pod sandbox 1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil 1968701 - Bare metal IPI installation is failed due to worker inspection
failure 1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed 1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning 1969284 - Console Query Browser: Can't reset zoom to fixed time range after dragging to zoom 1969315 - [4.8.0] BMAC doesn't check if ISO Url changed before queuing BMH for reconcile 1969352 - [4.8.0] Creating BareMetalHost without the "inspect.metal3.io" does not automatically add it 1969363 - [4.8.0] Infra env should show the time that ISO was generated. 1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it 1969386 - Filesystem's Utilization doesn't show in VM overview tab 1969397 - OVN bug causing subports to stay DOWN fails installations 1969470 - [4.8.0] Misleading error in case of install-config override bad input 1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step 1969525 - Replace golint with revive 1969535 - Topology edit icon does not link correctly when branch name contains slash 1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it 1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long 1969561 - Test "an end user can use OLM can subscribe to the operator" generates deprecation alert 1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire 1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io 1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1 1969626 - Portforward stream cleanup can cause kubelet to panic 1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out 1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check 1969712 - [4.8.0] Assisted service reports a malformed iso when we
fail to download the base iso 1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups 1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml 1969784 - WebTerminal widget should send resize events 1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails 1969891 - Fix rotated pipelinerun status icon issue in safari 1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse 1969903 - Provisioning a large number of hosts results in an unexpected delay in hosts becoming available 1969951 - Cluster local doesn't work for knative services created from dev console 1969969 - ironic-rhcos-downloader container uses and old base image 1970062 - ccoctl does not work with STS authentication 1970068 - ovnkube-master logs "Failed to find node ips for gateway" error 1970126 - [4.8.0] Disable "metrics-events" when deploying using the operator 1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change 1970262 - [4.8.0] Remove Agent CRD Status fields not needed 1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs 1970269 - [4.8.0] missing role in agent CRD 1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs 1970381 - Monitoring dashboards: Custom time range inputs should retain their values 1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed 1970401 - [4.8.0] AgentLabelSelector is required yet not supported 1970415 - SR-IOV Docs needs documentation for disabling port security on a network 1970470 - Add pipeline annotation to Secrets which are created for a private repo 1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod 1970624 - 4.7->4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io 1970828 - "500 Internal Error" for all openshift-monitoring routes 1970975 
- 4.7 -> 4.8 upgrades on AWS take longer than expected 1971068 - Removing invalid AWS instances from the CF templates 1971080 - 4.7->4.8 CI: KubePodNotReady due to MCD's 5m sleep between drain attempts 1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 ! 1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces 1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing "Validated" condition about VIP not matching machine network 1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn't work - clusteroperator/kube-apiserver is not upgradeable 1971589 - [4.8.0] Telemetry-client won't report metrics in case the cluster was installed using the assisted operator 1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service 1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery 1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409) 1971739 - Keep /boot RW when kdump is enabled 1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly 1972128 - ironic-static-ip-manager container still uses 4.7 base image 1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are 1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster 1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted 1972262 - [4.8.0] "baremetalhost.metal3.io/detached" uses boolean value where string is expected 1972426 - Adopt failure can trigger deprovisioning 1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage 1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration 1972530 - 
[4.8.0] no indication for missing debugInfo in AgentClusterInstall 1972565 - performance issues due to lost node, pods taking too long to relaunch 1972662 - DPDK KNI modules need some additional tools 1972676 - Requirements for authenticating kernel modules with X.509 1972687 - Using bound SA tokens causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings 1972690 - [4.8.0] infra-env condition message isn't informative in case of missing pull secret 1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration 1972768 - kube-apiserver setup fail while installing SNO due to port being used 1972864 - New `local-with-fallback` service annotation does not preserve source IP 1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8 1973117 - No storage class is installed, OCS and CNV installations fail 1973233 - remove kubevirt images and references 1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. 1973428 - Placeholder bug for OCP 4.8.0 image release 1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped 1973672 - fix ovn-kubernetes NetworkPolicy 4.7->4.8 upgrade issue 1973995 - [Feature:IPv6DualStack] tests are failing in dualstack 1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings 1974447 - Requirements for nvidia GPU driver container for driver toolkit 1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. 1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel 1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion 1974746 - [4.8.0] File system usage not being logged appropriately 1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay.
1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster 1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string 1974850 - [4.8] coreos-installer failing Execshield 1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift 1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing 1975155 - Kubernetes service IP cannot be accessed for rhel worker 1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types 1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData 1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified 1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve 1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn 1975672 - [4.8.0] Production logs are spammed on "Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient" 1975789 - worker nodes rebooted when we simulate a case where the api-server is down 1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s] 1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn't work - ingresscontroller "default" is degraded 1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted 1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel] 1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts 1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO 1977233 - [4.8] Unable to authenticate against IDP after upgrade to 
4.8-rc.1 1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO 1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller 1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes 1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses 1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8 1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod 1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used 1980788 - NTO-shipped stalld can segfault 1981633 - enhance service-ca injection 1982250 - Performance Addon Operator fails to install after catalog source becomes ready 1982252 - olm Operator is in CrashLoopBackOff state with error "couldn't cleanup cross-namespace ownerreferences"

  1. References:

https://access.redhat.com/security/cve/CVE-2016-2183 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-15106 https://access.redhat.com/security/cve/CVE-2020-15112 https://access.redhat.com/security/cve/CVE-2020-15113 https://access.redhat.com/security/cve/CVE-2020-15114 https://access.redhat.com/security/cve/CVE-2020-15136 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-26541 https://access.redhat.com/security/cve/CVE-2020-28469 https://access.redhat.com/security/cve/CVE-2020-28500 https://access.redhat.com/security/cve/CVE-2020-28852 https://access.redhat.com/security/cve/CVE-2021-3114 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3636 https://access.redhat.com/security/cve/CVE-2021-20206 https://access.redhat.com/security/cve/CVE-2021-20271 https://access.redhat.com/security/cve/CVE-2021-20291 https://access.redhat.com/security/cve/CVE-2021-21419 https://access.redhat.com/security/cve/CVE-2021-21623 https://access.redhat.com/security/cve/CVE-2021-21639 https://access.redhat.com/security/cve/CVE-2021-21640 https://access.redhat.com/security/cve/CVE-2021-21648 https://access.redhat.com/security/cve/CVE-2021-22133 https://access.redhat.com/security/cve/CVE-2021-23337 https://access.redhat.com/security/cve/CVE-2021-23362 https://access.redhat.com/security/cve/CVE-2021-23368 https://access.redhat.com/security/cve/CVE-2021-23382 https://access.redhat.com/security/cve/CVE-2021-25735 https://access.redhat.com/security/cve/CVE-2021-25737 https://access.redhat.com/security/cve/CVE-2021-26539 
https://access.redhat.com/security/cve/CVE-2021-26540 https://access.redhat.com/security/cve/CVE-2021-27292 https://access.redhat.com/security/cve/CVE-2021-28092 https://access.redhat.com/security/cve/CVE-2021-29059 https://access.redhat.com/security/cve/CVE-2021-29622 https://access.redhat.com/security/cve/CVE-2021-32399 https://access.redhat.com/security/cve/CVE-2021-33034 https://access.redhat.com/security/cve/CVE-2021-33194 https://access.redhat.com/security/cve/CVE-2021-33909 https://access.redhat.com/security/updates/classification/#moderate
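
Several of the advisories linked above (e.g. CVE-2020-7774, and the lodash zipObjectDeep flaw this record covers) belong to the prototype-pollution class: a deep property setter that does not filter path segments lets attacker-controlled keys such as `__proto__` reach `Object.prototype`. The sketch below is illustrative only, not the actual source of lodash or any affected package; `naiveSetPath`/`safeSetPath` are hypothetical helpers.

```javascript
// Illustration of the prototype-pollution bug class (hypothetical code,
// not lodash's or y18n's actual implementation).
function naiveSetPath(obj, path, value) {
  // Walks a dotted path, creating intermediate objects as needed.
  const keys = path.split(".");
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (typeof cur[keys[i]] !== "object" || cur[keys[i]] === null) {
      cur[keys[i]] = {};
    }
    cur = cur[keys[i]]; // a "__proto__" segment yields Object.prototype here
  }
  cur[keys[keys.length - 1]] = value;
  return obj;
}

naiveSetPath({}, "__proto__.polluted", true);
console.log({}.polluted); // true -- every object in the process now sees the key
delete Object.prototype.polluted; // undo the demo pollution

// Hardened variant: drop paths containing prototype-walking segments,
// which is effectively what the patched lodash releases do.
function safeSetPath(obj, path, value) {
  const blocked = new Set(["__proto__", "prototype", "constructor"]);
  const keys = path.split(".");
  if (keys.some((k) => blocked.has(k))) return obj; // reject malicious paths
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (typeof cur[keys[i]] !== "object" || cur[keys[i]] === null) {
      cur[keys[i]] = {};
    }
    cur = cur[keys[i]];
  }
  cur[keys[keys.length - 1]] = value;
  return obj;
}
```

With the naive setter, one attacker-controlled path is enough to make a property appear on every object in the process, which is how the `merge`/`zipObjectDeep` reports above escalate to data tampering or denial of service.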

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ d4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26 Z7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs ImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h gep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8 TWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX X//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb 1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA mtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/ 0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6 dpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx +bLvJlpGiMg= =NJcM -----END PGP SIGNATURE-----

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Description:

Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.

Bugs:

  • RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)

  • cluster became offline after apiserver health check (BZ# 1942589)

  • Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):

1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913444 - RFE Make the source code for the endpoint-metrics-operator public 1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull 1927520 - RHACM 2.3.0 images 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection 1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate 1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms 1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application 1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header 1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call 1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS 1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service 1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service 1942589 - cluster became offline after apiserver health check 1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() 1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character 1944827 - CVE-2021-28918 
nodejs-netmask: improper input validation of octal input data 1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service 1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing 1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js 1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service 1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option 1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command 1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets 1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs 1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method 1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions 1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id 1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
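
CVE-2020-28500 above reports ReDoS in lodash's `toNumber`, `trim` and `trimEnd`: trimming with a backtracking regex can go quadratic on inputs that end in a long whitespace run followed by a single non-whitespace character. A minimal sketch of the bug class and of an index-based rewrite in the spirit of the 4.17.21 fix (illustrative code, not lodash's actual source):

```javascript
// Regex-based trimEnd in the style of the vulnerable pattern: on an input
// like " ".repeat(50000) + "x", /\s+$/ retries the whitespace run from every
// start offset, giving O(n^2) matching time (a ReDoS vector).
const reTrimEnd = /\s+$/;
function regexTrimEnd(s) {
  return s.replace(reTrimEnd, "");
}

// Linear-time alternative: scan backwards one character at a time, so the
// worst case is a single pass regardless of where the whitespace sits.
function indexTrimEnd(s) {
  let end = s.length;
  while (end > 0 && /\s/.test(s.charAt(end - 1))) {
    end--;
  }
  return s.slice(0, end);
}

console.log(indexTrimEnd("lodash   ")); // "lodash"
console.log(indexTrimEnd("  a  b  ")); // "  a  b"
```

Both functions agree on all inputs; the difference shows up only as CPU time on adversarial strings, which is why this class of flaw is filed as uncontrolled resource consumption (DoS) rather than data corruption.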

  1. VDSM manages and monitors the host's storage, memory and networks as well as virtual machine creation, other host administration tasks, statistics gathering, and log collection.

Bug Fix(es):

  • An update in libvirt has changed the way block threshold events are submitted. As a result, the VDSM was confused by the libvirt event, and tried to look up a drive, logging a warning about a missing drive. In this release, the VDSM has been adapted to handle the new libvirt behavior, and does not log warnings about missing drives. (BZ#1948177)

  • Previously, when a virtual machine was powered off on the source host of a live migration and the migration finished successfully at the same time, the two events interfered with each other, and sometimes prevented migration cleanup resulting in additional migrations from the host being blocked. In this release, additional migrations are not blocked. (BZ#1959436)

  • Previously, when failing to execute a snapshot and re-executing it later, the second try would fail due to using the previous execution data. In this release, this data will be used only when needed, in recovery mode. (BZ#1984209)

  • The engine then deletes the volume, causing data corruption. 1998017 - Keep cinderlib dependencies optional for 4.4.8

Bug Fix(es):

  • Documentation is referencing deprecated API for Service Export - Submariner (BZ#1936528)

  • Importing of cluster fails due to error/typo in generated command (BZ#1936642)

  • RHACM 2.2.2 images (BZ#1938215)

  • 2.2 clusterlifecycle fails to allow provision fips: true clusters on aws, vsphere (BZ#1941778)

  • Summary:

The Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1492",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "primavera unifier",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.7"
      },
      {
        "model": "financial services crime and compliance management studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.8.3.0"
      },
      {
        "model": "jd edwards enterpriseone tools",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.2.6.1"
      },
      {
        "model": "health sciences data management workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.0.0.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.11"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.0"
      },
      {
        "model": "primavera unifier",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.59"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "communications cloud native core policy",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "1.11.0"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12"
      },
      {
        "model": "lodash",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "lodash",
        "version": "4.17.21"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.7"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.0"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12"
      },
      {
        "model": "health sciences data management workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "2.5.2.1"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.2.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.12"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "financial services crime and compliance management studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.8.2.0"
      },
      {
        "model": "retail customer management and segmentation foundation",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.0"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.4"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.3.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.11"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "communications design studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "7.4.2"
      },
      {
        "model": "communications services gatekeeper",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "7.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.58"
      },
      {
        "model": "lodash",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "lodash",
        "version": "4.17.21"
      },
      {
        "model": "lodash",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "lodash",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:lodash:lodash:*:*:*:*:*:node.js:*:*",
                "cpe_name": [],
                "versionEndExcluding": "4.17.21",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:18.8:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "17.12",
                "versionStartIncluding": "17.7",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:19.12:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:retail_customer_management_and_segmentation_foundation:19.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_services_gatekeeper:7.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:20.12:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "17.12.11",
                "versionStartIncluding": "17.12.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.4:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:9.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "20.12.7",
                "versionStartIncluding": "20.12.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "19.12.11",
                "versionStartIncluding": "19.12.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndIncluding": "18.8.12",
                "versionStartIncluding": "18.8.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_design_studio:7.4.2:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.5.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:1.11.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "9.2.6.1",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_data_management_workbench:2.5.2.1:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_data_management_workbench:3.0.0.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:financial_services_crime_and_compliance_management_studio:8.0.8.3.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:financial_services_crime_and_compliance_management_studio:8.0.8.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "1.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ],
    "trust": 1.3
  },
  "cve": "CVE-2020-28500",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "NVD",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.0,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "MEDIUM",
            "trust": 1.0,
            "userInteractionRequired": false,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Low",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "Partial",
            "baseScore": 5.0,
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2020-28500",
            "impactScore": null,
            "integrityImpact": "None",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.9,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.0,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "id": "VHN-373964",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "NVD",
            "availabilityImpact": "LOW",
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "impactScore": 1.4,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "Low",
            "baseScore": 5.3,
            "baseSeverity": "Medium",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2020-28500",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2020-28500",
            "trust": 1.8,
            "value": "MEDIUM"
          },
          {
            "author": "report@snyk.io",
            "id": "CVE-2020-28500",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202102-1168",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-373964",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-28500",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Lodash Exists in unspecified vulnerabilities.Service operation interruption (DoS) It may be in a state. lodash is an open source JavaScript utility library. There is a security vulnerability in Lodash. Please keep an eye on CNNVD or manufacturer announcements. Description:\n\nThe ovirt-engine package provides the manager for virtualization\nenvironments. \nThis manager enables admins to define hosts and networks, as well as to add\nstorage, create VMs and manage user permissions. \n\nBug Fix(es):\n\n* This release adds the queue attribute to the virtio-scsi driver in the\nvirtual machine configuration. This improvement enables multi-queue\nperformance with the virtio-scsi driver. (BZ#911394)\n\n* With this release, source-load-balancing has been added as a new\nsub-option for xmit_hash_policy. It can be configured for bond modes\nbalance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying\nxmit_hash_policy=vlan+srcmac. (BZ#1683987)\n\n* The default DataCenter/Cluster will be set to compatibility level 4.6 on\nnew installations of Red Hat Virtualization 4.4.6.; (BZ#1950348)\n\n* With this release, support has been added for copying disks between\nregular Storage Domains and Managed Block Storage Domains. \nIt is now possible to migrate disks between Managed Block Storage Domains\nand regular Storage Domains. (BZ#1906074)\n\n* Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was\nset by default to false and was supposed to be uses in cluster\ncompatibility levels below 4.4. The value was set to general version. \nWith this release, each cluster level has it\u0027s own value, defaulting to\nfalse for 4.4 and above. This will reduce unnecessary overhead in removing\ntime outs of the file system freeze command. 
(BZ#1932284)\n\n* With this release, running virtual machines is supported for up to 16TB\nof RAM on x86_64 architectures. (BZ#1944723)\n\n* This release adds the gathering of oVirt/RHV related certificates to\nallow easier debugging of issues for faster customer help and issue\nresolution. \nInformation from certificates is now included as part of the sosreport. \nNote that no corresponding private key information is gathered, due to\nsecurity considerations. (BZ#1845877)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine\n1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors\n1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain\n1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine\n1717411 - improve engine logging when migration fail\n1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs\n1775145 - Incorrect message from hot-plugging memory\n1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC. \n1845877 - [RFE] Collect information about RHV PKI\n1875363 - engine-setup failing on FIPS enabled rhel8 machine\n1906074 - [RFE] Support disks copy between regular and managed block storage domains\n1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration\n1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning\n1919195 - Unable to create snapshot without saving memory of running VM from VM Portal. 
\n1919984 - engine-setup failse to deploy the grafana service in an external DWH server\n1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal\n1926018 - Failed to run VM after FIPS mode is enabled\n1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing \u0027rsyslog-gnutls\u0027 package. \n1928158 - Rename \u0027CA Certificate\u0027 link in welcome page to \u0027Engine CA certificate\u0027\n1928188 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1929211 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error \"missing groups or modules: virt:8.4\"\n1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful\n1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured\n1932284 - Engine handled FS freeze is not fast enough for Windows systems\n1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed\n1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2\n1943267 - Snapshot creation is failing for VM having vGPU. \n1944723 - [RFE] Support virtual machines with 16TB memory\n1948577 - [welcome page] remove \"Infrastructure Migration\" section (obsoleted)\n1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule\n1949547 - rhv-log-collector-analyzer report contains \u0027b characters\n1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6\n1950466 - Host installation failed\n1954401 - HP VMs pinning is wiped after edit-\u003eok and pinned to first physical CPUs.  
Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.8.2 bug fix and security update\nAdvisory ID:       RHSA-2021:2438-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:2438\nIssue date:        2021-07-27\nCVE Names:         CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 \n                   CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 \n                   CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 \n                   CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 \n                   CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 \n                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n                   CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 \n                   CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 \n                   CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 \n                   CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 \n                   CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 \n                   CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 \n                   CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 \n                   CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 \n                   CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 \n                   CVE-2021-33909 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.8.2 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.8. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.8.2. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2021:2437\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nSecurity Fix(es):\n\n* SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n(CVE-2016-2183)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)\n\n* etcd: DoS in wal/wal.go (CVE-2020-15112)\n\n* etcd: directories created via os.MkdirAll are not checked for permissions\n(CVE-2020-15113)\n\n* etcd: gateway can include itself as an endpoint resulting in resource\nexhaustion and leads to DoS (CVE-2020-15114)\n\n* etcd: no authentication is performed against endpoints provided in the\n- --endpoints flag (CVE-2020-15136)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* 
nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* golang: crypto/elliptic: incorrect operations on the P-224 curve\n(CVE-2021-3114)\n\n* containernetworking-cni: Arbitrary path injection via type field in CNI\nconfiguration (CVE-2021-20206)\n\n* containers/storage: DoS via malicious image (CVE-2021-20291)\n\n* prometheus: open redirect under the /new endpoint (CVE-2021-29622)\n\n* golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)\n\n* go.elastic.co/apm: leaks sensitive HTTP headers during panic\n(CVE-2021-22133)\n\nSpace precludes listing in detail the following additional CVEs fixes:\n(CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382),\n(CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and\n(CVE-2021-23368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nAdditional Changes:\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-x86_64\n\nThe image digest is\nssha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-s390x\n\nThe image digest is\nsha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le\n\nThe image digest is\nsha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f\n\nAll OpenShift Container Platform 4.8 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n1725981 - oc explain does not work well with full resource.group names\n1747270 - [osp] Machine with name \"\u003ccluster-id\u003e-worker\"couldn\u0027t join the cluster\n1772993 - rbd block devices attached to a host are visible in unprivileged container pods\n1786273 - [4.6] KAS pod logs show \"error building openapi models ... has invalid property: anyOf\" for CRDs\n1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts\n1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header\n1812212 - ArgoCD example application cannot be downloaded from github\n1817954 - [ovirt] Workers nodes are not numbered sequentially\n1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole\n1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with \"Unable to connect to the server\"\n1825417 - The containerruntimecontroller doesn\u0027t roll back to CR-1 if we delete CR-2\n1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades\n1835264 - Intree provisioner doesn\u0027t respect PVC.spec.dataSource sometimes\n1839101 - Some sidebar links in developer perspective don\u0027t follow same project\n1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes\n1846875 - Network setup test high failure rate\n1848151 - Console continues to poll the ClusterVersion resource when the user doesn\u0027t have authority\n1850060 - After upgrading to 3.11.219 timeouts are appearing. 
\n1852637 - Kubelet sets incorrect image names in node status images section\n1852743 - Node list CPU column only show usage\n1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values\n1857008 - [Edge] [BareMetal] Not provided STATE value for machines\n1857477 - Bad helptext for storagecluster creation\n1859382 - check-endpoints panics on graceful shutdown\n1862084 - Inconsistency of time formats in the OpenShift web-console\n1864116 - Cloud credential operator scrolls warnings about unsupported platform\n1866222 - Should output all options when runing `operator-sdk init --help`\n1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard\n1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert\n1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions\n1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host\n1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions\n1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go\n1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS\n1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag\n1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method\n1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics\n1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly\n1872659 - ClusterAutoscaler doesn\u0027t scale down when a node is not needed anymore\n1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack\n1873649 - proxy.config.openshift.io should validate user inputs\n1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store 
credentials\n1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable\n1876918 - scheduler test leaves taint behind\n1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1\n1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable\n1878685 - Ingress resource with \"Passthrough\"  annotation does not get applied when using the newer \"networking.k8s.io/v1\" API\n1879077 - Nodes tainted after configuring additional host iface\n1879140 - console auth errors not understandable by customers\n1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens\n1879184 - CVO must detect or log resource hotloops\n1879495 - [4.6] namespace \\\u201copenshift-user-workload-monitoring\\\u201d does not exist\u201d\n1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string\n1879944 - [OCP 4.8] Slow PV creation with vsphere\n1880757 - AWS: master not removed from LB/target group when machine deleted\n1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)\n1881210 - nodePort for router-default metrics with NodePortService does not exist\n1881481 - CVO hotloops on some service manifests\n1881484 - CVO hotloops on deployment manifests\n1881514 - CVO hotloops on imagestreams from cluster-samples-operator\n1881520 - CVO hotloops on (some) clusterrolebindings\n1881522 - CVO hotloops on clusterserviceversions packageserver\n1881662 - Error getting volume limit for plugin kubernetes.io/\u003cname\u003e in kubelet logs\n1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io\n1881938 - migrator deployment doesn\u0027t tolerate masters\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1883587 - No option for user to select volumeMode\n1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete 
machine\n1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster\n1884800 - Failed to set up mount unit: Invalid argument\n1885186 - Removing ssh keys MC does not remove the key from authorized_keys\n1885349 - [IPI Baremetal] Proxy Information Not passed to metal3\n1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses\n1886572 - auth: error contacting auth provider when extra ingress (not default)  goes down\n1887849 - When creating new storage class failure_domain is missing. \n1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs\n1889689 - AggregatedAPIErrors alert may never fire\n1890678 - Cypress:  Fix \u0027structure\u0027 accesibility violations\n1890828 - Intermittent prune job failures causing operator degradation\n1891124 - CP Conformance: CRD spec and status failures\n1891301 - Deleting bmh  by \"oc delete bmh\u0027 get stuck\n1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass\n1891766 - [LSO] Min-Max filter\u0027s from OCS wizard accepts Negative values and that cause PV not getting created\n1892642 - oauth-server password metrics do not appear in UI after initial OCP installation\n1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version\n1893850 - Add an alert for requests rejected by the apiserver\n1893999 - can\u0027t login ocp cluster with oc 4.7 client without the username\n1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion\n1895053 - Allow builds to optionally mount in cluster trust stores\n1896226 - recycler-pod template should not be in kubelet static manifests directory\n1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1896751 - [RHV IPI] Worker nodes stuck in the Provisioning 
Stage if the machineset has a long name\n1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI install\n1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout\n1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1899057 - fix spurious br-ex MAC address error log\n1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay\n1899587 - [External] RGW usage metrics shown on Object Service Dashboard  is incorrect\n1900454 - Enable host-based disk encryption on Azure platform\n1900819 - Scaled ingress replicas following sharded pattern don\u0027t balance evenly across multi-AZ\n1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed\n1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API\n1901648 - \"do you need to set up custom dns\" tooltip inaccurate\n1902003 - Jobs Completions column is not sorting when there are \"0 of 1\" and \"1 of 1\" in the list. \n1902076 - image registry operator should monitor status of its routes\n1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs\n1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given\n1903228 - Pod stuck in Terminating, runc init process frozen\n1903383 - Latest RHCOS 47.83. 
builds failing to install: mount /root.squashfs failed
1903553 - systemd container renders node NotReady after deleting it
1903700 - metal3 Deployment doesn't have unique Pod selector
1904006 - The --dir option doest not work for command `oc image extract`
1904505 - Excessive Memory Use in Builds
1904507 - vsphere-problem-detector: implement missing metrics
1904558 - Random init-p error when trying to start pod
1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests
1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list
1905159 - Installation on previous unused dasd fails after formatting
1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory
1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails
1905577 - Control plane machines not adopted when provisioning network is disabled
1905627 - Warn users when using an unsupported browser such as IE
1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP
1905849 - Default volumesnapshotclass should be created when creating default storageclass
1906056 - Bundles skipped via the `skips` field cannot be pinned
1906102 - CBO produces standard metrics
1906147 - ironic-rhcos-downloader should not use --insecure
1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart
1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region
1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage
1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value
1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything
1907614 - Update kubernetes deps to 1.20
1908068 - Enable DownwardAPIHugePages feature gate
1908169 - The example of Import URL is "Fedora cloud image list" for all templates.
1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container
1908343 - Input labels in Manage columns modal should be clickable
1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures
1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule
1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes
1908765 - [SCALE] enable OVN lflow data path groups
1908774 - [SCALE] enable OVN DB memory trimming on compaction
1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it
1909091 - Pod/node/ip/template isn't showing when vm is running
1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing
1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade
1910067 - UPI: openstacksdk fails on "server group list"
1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing
1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status
1910378 - socket timeouts for webservice communication between pods
1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling
1910500 - Could not list CSI provisioner on web when create storage class on GCP platform
1911211 - Should show the cert-recovery-controller version correctly
1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames
1912571 - libvirt: Support setting dnsmasq options through the install config
1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1913112 - BMC details should be optional for unmanaged hosts
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913341 - GCP: strange cluster behavior in CI run
1913399 - switch to v1beta1 for the priority and fairness APIs
1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint
1913532 - After a 4.6 to 4.7 upgrade, a node went unready
1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory"
1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs
1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root
1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20
1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names
1915693 - Not able to install gpu-operator on cpumanager enabled node.
1915971 - Role and Role Binding breadcrumbs do not work as expected
1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page
1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall
1916392 - scrape priority and fairness endpoints for must-gather
1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form
1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready"
1916553 - Default template's description is empty on details tab
1916593 - Destroy cluster sometimes stuck in a loop
1916872 - need ability to reconcile exgw annotations on pod add
1916890 - [OCP 4.7] api or api-int not available during installation
1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs.
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - `oc adm catalog mirror` doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use `oc new-app --name=testapp2 -i ` with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in "Insights" popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the `resources` section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn't support "csi.storage.k8s.io/fsTyps" parameter
1932135 - When "iopsPerGB" parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When "iopsPerGB" parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can't find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: "\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitarly
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard `Default /Kubernetes / Compute Resources / Namespace (Workloads)`
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can't finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen isntt stop loading
1937005 - when kuryr quotas are unlimited, we should not sent alerts
1937018 - FilterToolbar component does not handle 'null' value for 'rowFilters' prop
1937020 - Release new from image stream chooses incorrect ID based on status
1937077 - Blank White page on Topology
1937102 - Pod Containers Page Not Translated
1937122 - CAPBM changes to support flexible reboot modes
1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted
1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes
1937244 - [Local Storage] The model name of aws EBS doesn't be extracted well
1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes
1937452 - cluster-network-operator CI linting fails in master branch
1937459 - Wrong Subnet retrieved for Service without Selector
1937460 - [CI] Network quota pre-flight checks are failing the installation
1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster
1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint
1937535 - Not all image pulls within OpenShift builds retry
1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes
1937627 - Bump DEFAULT_DOC_URL for 4.8
1937628 - Bump upgrade channels for 4.8
1937658 - Description for storage class encryption during storagecluster creation needs to be updated
1937666 - Mouseover on headline
1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage
1937693 - ironic image "/" cluttered with files
1937694 - [oVirt] split
ovirt providerIDReconciler logic into NodeController and ProviderIDController\n1937717 - If browser default font size is 20, the layout of template screen breaks\n1937722 - OCP 4.8 vuln due to BZ 1936445\n1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator\n1937941 - [RFE]fix wording for favorite templates\n1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations\n1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go\n1938321 - Cannot view PackageManifest objects in YAML on \u0027Home \u003e Search\u0027 page nor \u0027CatalogSource details \u003e Operators tab\u0027\n1938465 - thanos-querier should set a CPU request on the thanos-query container\n1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container\n1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them\n1938468 - kube-scheduler-operator has a container without a CPU request\n1938492 - Marketplace extract container does not request CPU or memory\n1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not\n1938636 - Can\u0027t set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller\n1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph\n1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%\n1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances\n1938949 - [VPA] Updater failed to trigger evictions due to \"vpa-admission-controller\" not found\n1939054 - machine healthcheck kills aws spot instance before generated\n1939060 - CNO: nodes and masters are upgrading simultaneously\n1939069 - Add source to vm template silently failed when no storage class is defined in the cluster\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1939168 - 
Builds failing for OCP 3.11 since PR#25 was merged\n1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz\n1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez\n1939232 - CI tests using openshift/hello-world broken by Ruby Version Update\n1939270 - fix co upgradeableFalse status and reason\n1939294 - OLM may not delete pods with grace period zero (force delete)\n1939412 - missed labels for thanos-ruler pods\n1939485 - CVE-2021-20291 containers/storage: DoS via malicious image\n1939547 - Include container=\"POD\" in resource queries\n1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0\n1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated\n1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs\n1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent\n1939661 - support new AWS region ap-northeast-3\n1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution\n1939731 - Image registry operator reports unavailable during normal serial run\n1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters\n1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase\n1939752 - ovnkube-master sbdb container does not set requests on cpu or memory\n1939753 - Delete HCO is stucking if there is still VM in the cluster\n1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page\n1939853 - [DOC] Creating manifests API should not allow folder in the \"file_name\"\n1939865 - GCP PD CSI driver does not have CSIDriver instance\n1939869 - [e2e][automation] Add annotations to datavolume for HPP\n1939873 - Unlimited number of characters accepted for base domain name\n1939943 - 
`cluster-kube-apiserver-operator check-endpoints` observed a panic: runtime error: invalid memory address or nil pointer dereference\n1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration\n1940057 - Openshift builds should use a wach instead of polling when checking for pod status\n1940142 - 4.6-\u003e4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying\n1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network\n1940206 - Selector and VolumeTableRows not i18ned\n1940207 - 4.7-\u003e4.6 rollbacks stuck on prometheusrules admission webhook \"no route to host\"\n1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)\n1940318 - No data under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027\n1940322 - Split of dashbard  is wrong, many Network parts\n1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn\u0027t have flavors needed for compute machines\n1940361 - [e2e][automation] Fix vm action tests with storageclass HPP\n1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters\n1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys\n1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages\n1940499 - hybrid-overlay not logging properly before exiting due to an error\n1940518 - Components in bare metal components lack resource requests\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned\n1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info\n1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list\n1940876 - 
Components in ovirt components lack resource requests\n1940889 - Installation failures in OpenStack release jobs\n1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io\n1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP\n1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster\n1940950 - vsphere: client/bootstrap CSR double create\n1940972 - vsphere: [4.6] CSR approval delayed for unknown reason\n1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16. \n1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy\n1941342 - Add `kata-osbuilder-generate.service` as part of the default presets\n1941456 - Multiple pods stuck in ContainerCreating status with the message \"failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user\" being seen in the journal log\n1941526 - controller-manager-operator: Observed a panic: nil pointer dereference\n1941592 - HAProxyDown not Firing\n1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp\n1941625 - Developer -\u003e Topology - i18n misses\n1941635 - Developer -\u003e Monitoring - i18n misses\n1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid\n1941645 - Developer -\u003e Builds - i18n misses\n1941655 - Developer -\u003e Pipelines - i18n misses\n1941667 - Developer -\u003e Project - i18n misses\n1941669 - Developer -\u003e ConfigMaps - i18n misses\n1941759 - Errored pre-flight checks should not prevent install\n1941798 - Some details pages don\u0027t have internationalized ResourceKind labels\n1941801 - Many filter toolbar dropdowns haven\u0027t been internationalized\n1941815 - From the web console the terminal can no longer connect after using leaving and 
returning to the terminal view\n1941859 - [assisted operator] assisted pod deploy first time in error state\n1941901 - Toleration merge logic does not account for multiple entries with the same key\n1941915 - No validation against template name in boot source customization\n1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description\n1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8\n1941990 - Pipeline metrics endpoint changed in osp-1.4\n1941995 - fix backwards incompatible trigger api changes in osp1.4\n1942086 - Administrator -\u003e Home - i18n misses\n1942117 - Administrator -\u003e Workloads - i18n misses\n1942125 - Administrator -\u003e Serverless - i18n misses\n1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)\n1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail\n1942271 - Insights operator doesn\u0027t gather pod information from openshift-cluster-version\n1942375 - CRI-O failing with error \"reserving ctr name\"\n1942395 - The status is always \"Updating\" on dc detail page after deployment has failed. 
\n1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied\n1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate\n1942536 - Corrupted image preventing containers from starting\n1942548 - Administrator -\u003e Networking - i18n misses\n1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic\n1942555 - Network policies in ovn-kubernetes don\u0027t support external traffic from router when the endpoint publishing strategy is HostNetwork\n1942557 - Query is reporting \"no datapoint\" when label cluster=\"\" is set but work when the label is removed or when running directly in Prometheus\n1942608 - crictl cannot list the images with an error: error locating item named \"manifest\" for image with ID\n1942614 - Administrator -\u003e Storage - i18n misses\n1942641 - Administrator -\u003e Builds - i18n misses\n1942673 - Administrator -\u003e Pipelines - i18n misses\n1942694 - Resource names with a colon do not display property in the browser window title\n1942715 - Administrator -\u003e User Management - i18n misses\n1942716 - Quay Container Security operator has Medium \u003c-\u003e Low colors reversed\n1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]\n1942736 - Administrator -\u003e Administration - i18n misses\n1942749 - Install Operator form should use info icon for popovers\n1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls\n1942839 - Windows VMs fail to start on air-gapped environments\n1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set\n1942858 - [RFE]Confusing detach volume UX\n1942883 - AWS EBS CSI driver does not support partitions\n1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy\n1942935 - must-gather improvements\n1943145 - vsphere: 
client/bootstrap CSR double create\n1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked\n1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest\n1943238 - The conditions table does not occupy 100% of the width. \n1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane\n1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB. \n1943315 - avoid workload disruption for ICSP changes\n1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes\n1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest\n1943356 - Dynamic plugins surfaced in the UI should be referred to as \"Console plugins\"\n1943539 - crio-wipe is failing to start \"Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container\"\n1943543 - DeploymentConfig Rollback doesn\u0027t reset params correctly\n1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement\n1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds\n1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage\n1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn\n1943649 - don\u0027t use hello-openshift for network-check-target\n1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress\n1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions\n1943804 - API server on AWS takes disruption between 70s and 110s after pod 
begins termination via external LB\n1943845 - Router pods should have startup probes configured\n1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors\n1944160 - CNO: nbctl daemon should log reconnection info\n1944180 - OVN-Kube Master does not release election lock on shutdown\n1944246 - Ironic fails to inspect and move node to \"manageable\u0027 but get bmh remains in \"inspecting\"\n1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region\n1944509 - Translatable texts without context in ssh expose component\n1944581 - oc project not works with cluster proxy\n1944587 - VPA could not take actions based on the recommendation when min-replicas=1\n1944590 - The field name \"VolumeSnapshotContent\" is wrong on VolumeSnapshotContent detail page\n1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1944631 - openshif authenticator should not accept non-hashed tokens\n1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with \".. 
still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock\"\n1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures\n1944674 - Project field become to \"All projects\" and disabled in \"Review and create virtual machine\" step in devconsole\n1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods\n1944761 - field level help instances do not use common util component \u003cFieldLevelHelp\u003e\n1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present\n1944763 - field level help instances do not use common util component \u003cFieldLevelHelp\u003e\n1944853 - Update to nodejs \u003e=14.15.4 for ARM\n1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts\n1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation\n1945027 - Button \u0027Copy SSH Command\u0027 does not work\n1945085 - Bring back API data in etcd test\n1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled\n1945103 - \u0027User credentials\u0027 shows even the VM is not running\n1945104 - In k8s 1.21 bump \u0027[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume\u0027 tests are disabled\n1945146 - Remove pipeline Tech preview badge for pipelines GA operator\n1945236 - Bootstrap ignition shim doesn\u0027t follow proxy settings\n1945261 - Operator dependency not consistently chosen from default channel\n1945312 - project deletion does not reset UI project context\n1945326 - console-operator: does not check route health periodically\n1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules\n1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly\n1945443 - operator-lifecycle-manager-packageserver flaps 
Available=False with no reason or message\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1945548 - catalog resource update failed if spec.secrets set to \"\"\n1945584 - Elasticsearch  operator fails to install on 4.8 cluster on ppc64le/s390x\n1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION\n1945630 - Pod log filename no longer in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin\n1945646 - gcp-routes.sh running as initrc_t unnecessarily\n1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret\n1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry\n1945687 - Dockerfile needs updating to new container CI registry\n1945700 - Syncing boot mode after changing device should be restricted to Supermicro\n1945816 - \" Ingresses \" should be kept in English for Chinese\n1945818 - Chinese translation issues: Operator should be the same with English `Operators`\n1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out\n1945910 - [aws] support byo iam roles for instances\n1945948 - SNO: pods can\u0027t reach ingress when the ingress uses a different IPv6. \n1946079 - Virtual master is not getting an IP address\n1946097 - [oVirt] oVirt credentials secret contains unnecessary \"ovirt_cafile\"\n1946119 - panic parsing install-config\n1946243 - No relevant error when pg limit is reached in block pools page\n1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image\n1946320 - Incorrect error message in Deployment Attach Storage Page\n1946449 - [e2e][automation] Fix cloud-init tests as UI changed\n1946458 - Edit Application action overwrites Deployment envFrom values on save\n1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI. 
\n1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default\n1946497 - local-storage-diskmaker pod logs \"DeviceSymlinkExists\" and \"not symlinking, could not get lock: \u003cnil\u003e\"\n1946506 - [on-prem] mDNS plugin no longer needed\n1946513 - honor use specified system reserved with auto node sizing\n1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready\n1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster\n1946607 - etcd readinessProbe is not reflective of actual readiness\n1946705 - Fix issues with \"search\" capability in the Topology Quick Add component\n1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation\n1946788 - Serial tests are broken because of router\n1946790 - Marketplace operator flakes Available=False OperatorStarting during updates\n1946838 - Copied CSVs show up as adopted components\n1946839 - [Azure] While mirroring images to private registry throwing error: invalid character \u0027\u003c\u0027 looking for beginning of value\n1946865 - no \"namespace:kube_pod_container_resource_requests_cpu_cores:sum\" and \"namespace:kube_pod_container_resource_requests_memory_bytes:sum\" metrics\n1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken\n1946922 - Ingress details page doesn\u0027t show referenced secret name and link\n1946929 - the default dns operator\u0027s Progressing status is always True and cluster operator dns Progressing status is False\n1947036 - \"failed to create Matchbox client or connect\" on e2e-metal jobs or metal clusters via cluster-bot\n1947066 - machine-config-operator pod crashes when noProxy is *\n1947067 - [Installer] Pick up upstream fix for installer console output\n1947078 - Incorrect skipped status for conditional tasks in the pipeline run\n1947080 - SNO IPv6 with 
\u0027temporary 60-day domain\u0027 option fails with IPv4 exception\n1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install\n1947164 - Print \"Successfully pushed\" even if the build push fails. \n1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed. \n1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48)\n1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name\u0027s\n1947360 - [vSphere csi driver operator] operator pod runs as \u201cBestEffort\u201d qosClass\n1947371 - [vSphere csi driver operator] operator doesn\u0027t create \u201ccsidriver\u201d instance\n1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout\n1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)\n1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot\n1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)\n1947663 - disk details are not synced in web-console\n1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin\n1947684 - MCO on SNO sometimes has rendered configs and sometimes does not\n1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals. 
\n1947719 - 8 APIRemovedInNextReleaseInUse info alerts display\n1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods\n1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade\n1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?\n1947771 - [kube-descheduler]descheduler operator pod should not run as \u201cBestEffort\u201d qosClass\n1947774 - CSI driver operators use \"Always\" imagePullPolicy in some containers\n1947775 - [vSphere csi driver operator] doesn\u2019t use the downstream images from payload. \n1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade\n1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade\n1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display\n1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert\n1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display\n1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this 
component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display\n1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display\n1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin\n1947828 - `download it` link should save pod log in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel  is changed\n1947917 - Egress Firewall does not reliably apply firewall rules\n1947946 - Operator upgrades can delete existing CSV before completion\n1948011 - openshift-controller-manager constantly reporting type \"Upgradeable\" status Unknown\n1948012 - service-ca constantly reporting type \"Upgradeable\" status Unknown\n1948019 - [4.8] Large number of requests to the infrastructure cinder volume service\n1948022 - Some on-prem namespaces missing from must-gather\n1948040 - cluster-etcd-operator: etcd is using deprecated logger\n1948082 - Monitoring should not set Available=False with no reason on updates\n1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O. 
\n1948232 - DNS operator performs spurious updates in response to API\u0027s defaulting of daemonset\u0027s maxSurge and service\u0027s ipFamilies and ipFamilyPolicy fields\n1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later\n1948359 - [aws] shared tag was not removed from user provided IAM role\n1948410 - [LSO] Local Storage Operator uses imagePullPolicy as \"Always\"\n1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn\u0027t take effective after changing\n1948427 - No action is triggered after click \u0027Continue\u0027 button on \u0027Show community Operator\u0027 windows\n1948431 - TechPreviewNoUpgrade does not enable CSI migration\n1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node\n1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge\n1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization  TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]\n1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes\n1948513 - get-resources.sh doesn\u0027t honor the no_proxy settings\n1948524 - \u0027DeploymentUpdated\u0027 Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute\n1948546 - VM of worker is in error state when a network has port_security_enabled=False\n1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand\n1948555 - A lot of events \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\" were seen in azure disk csi driver verification test\n1948563 - End-to-End Secure boot deployment fails \"Invalid value for input variable\"\n1948582 - Need ability to specify local gateway mode in CNO config\n1948585 - Need a CI jobs to test local gateway mode with bare metal\n1948592 - [Cluster Network Operator] Missing Egress 
Router Controller
1948606 - DNS e2e test fails "[sig-arch] Only known images used by tests" because it does not use a known image
1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
1948626 - TestRouteAdmissionPolicy e2e test is failing often
1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI
1948634 - upgrades: allow upgrades without version change
1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found
1948701 - unneeded CCO alert already covered by CVO
1948703 - p&f: probes should not get 429s
1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows `bootstrap.ign was not found`
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query
1948936 - [e2e][automation][prow] Prow script point to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere-* images to vsphere-* images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to secret lack(in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listeners timeout are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use `oc adm top is` shortcut when asking for `imagestreams`
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command `ccoctl aws create-identity-provider` with `--output-dir` parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE]console page show error when vm is poused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utlization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - "Create" button on the Templates page is confuse
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - "Customize virtual machine" and the "Advanced" feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE]It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidr's apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build does not available for OCP4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster
1954421 - Get 'Application is not available' when access Prometheus UI
1954459 - Error: Gateway Time-out display on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (`UtilizationCard`) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] `oc get apirequestcount` shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1955196 - linuxptp-daemon crash on 4.8
1955226 - operator updates apirequestcount CRD over and over
1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing
1955256 - stop collecting API that no longer exists
1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts
1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google"
1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator
1955445 - Drop crio image metrics with high cardinality
1955457 - Drop container_memory_failures_total metric because of high cardinality
1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter
1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0
1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used
1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation
1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range
1955554 - MAO does not react to events triggered from Validating Webhook Configurations
1955589 - thanos-querier should have a PodDisruptionBudget in HA topology
1955595 - Add DevPreviewLongLifecycle Descheduler profile
1955596 - Pods stuck in creation phase on realtime kernel SNO
1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing
1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error']
1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta
1955749 - OCP branded templates need to be translated
1955761 - packageserver clusteroperator does not set reason or message for Available condition
1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces
1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation
1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables
1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable
1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated
1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct
1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio
1955969 - Workers cannot be deployed attached to multiple networks.
1956079 - Installer gather doesn't collect any networking information
1956208 - Installer should validate root volume type
1956220 - Set htt proxy system properties as expected by kubernetes-client
1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet
1956334 - Event Listener Details page does not show Triggers section
1956353 - test: analyze job consistently fails
1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate
1956405 - Bump k8s dependencies in cluster resource override admission operator
1956411 - Apply custom tags to AWS EBS volumes
1956480 - [4.8] Bootimage bump tracker
1956606 - probes FlowSchema manifest not included in any cluster profile
1956607 - Multiple manifests lack cluster profile annotations
1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup
1956610 - manage-helm-repos manifest lacks cluster profile annotations
1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string
1956650 - The container disk URL is empty for Windows guest tools
1956768 - aws-ebs-csi-driver-controller-metrics TargetDown
1956826 - buildArgs does not work when the value is taken from a secret
1956895 - Fix chatty kubelet log message
1956898 - fix log files being overwritten on container state loss
1956920 - can't open terminal for pods that have more than one container running
1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false
1956978 - Installer gather doesn't include pod names in filename
1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW
1957041 - Update CI e2echart with more node info
1957127 - Delegated authentication: reduce the number of watch requests
1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image
1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes
1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient
1957179 - Incorrect VERSION in node_exporter
1957190 - CI jobs failing due too many watch requests (prometheus-operator)
1957198 - Misspelled console-operator condition
1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap
1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2
1957261 - update godoc for new build status image change trigger fields
1957295 - Apply priority classes conventions as test to openshift/origin repo
1957315 - kuryr-controller doesn't indicate being out of quota
1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly
1957374 - mcddrainerr doesn't list specific pod
1957386 - Config serve and validate command should be under alpha
1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions
1957502 - Infrequent panic in kube-apiserver in aws-serial job
1957561 - lack of pseudolocalization for some text on Cluster Setting page
1957584 - Routes are not getting created when using hostname without FQDN standard
1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes
1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's
1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out
1957748 - Ptp operator pod should have CPU and memory requests set but not limits
1957756 - Device Replacemet UI, The status of the disk is "replacement ready" before I clicked on "start replacement"
1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1957775 - CVO creating cloud-controller-manager too early causing upgrade failures
1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error
1957822 - Update apiserver tlsSecurityProfile description to include Custom profile
1957832 - CMO end-to-end tests work only on AWS
1957856 - 'resource name may not be empty' is shown in CI testing
1957869 - baremetal IPI power_interface for irmc is inconsistent
1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects
1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer
1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install
1957895 - Cypress helper projectDropdown.shouldContain is not an assertion
1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads
1957926 - "Add Capacity" should allow to add n*3 (or n*4) local devices at once
1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state
1957967 - Possible test flake in listPage Cypress view
1957972 - Leftover templates from mdns
1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7
1957982 - Deployment Actions clickable for view-only projects
1957991 - ClusterOperatorDegraded can fire during installation
1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator
1958080 - Missing i18n for login, error and selectprovider pages
1958094 - Audit log files are corrupted sometimes
1958097 - don't show "old, insecure token format" if the token does not actually exist
1958114 - Ignore staged vendor files in pre-commit script
1958126 - [OVN]Egressip doesn't take effect
1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs
1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names
1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs
1958285 - Deployment considered unhealthy despite being available and at latest generation
1958296 - OLM must explicitly alert on deprecated APIs in use
1958329 - pick 97428: add more context to log after a request times out
1958367 - Build metrics do not aggregate totals by build strategy
1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton
1958405 - etcd: current health checks and reporting are not adequate to ensure availability
1958406 - Twistlock flags mode of /var/run/crio/crio.sock
1958420 - openshift-install 4.7.10 fails with segmentation error
1958424 - aws: support more auth options in manual mode
1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View
1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse
1958643 - All pods creation stuck due to SR-IOV webhook timeout
1958679 - Compression on pool can't be disabled via UI
1958753 - VMI nic tab is not loadable
1958759 - Pulling Insights report is missing retry logic
1958811 - VM creation fails on API version mismatch
1958812 - Cluster upgrade halts as machine-config-daemon fails to parse `rpm-ostree status` during cluster upgrades
1958861 - [CCO] pod-identity-webhook certificate request failed
1958868 - ssh copy is missing when vm is running
1958884 - Confusing error message when volume AZ not found
1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs
1958958 - [SCALE] segfault with ovnkube adding to address set
1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes
1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod
1959058 - ovn-kubernetes has lock contention on the LSP cache
1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change
1959177 - Descheduler dev manifests are missing permissions
1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload
1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates
1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring
1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check
1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system
1959406 - Difficult to debug performance on ovn-k without pprof enabled
1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results
1959479 - machines doesn't support dual-stack loadbalancers on Azure
1959513
- Cluster-kube-apiserver does not use library-go for audit pkg\n1959519 - Operand details page only renders one status donut no matter how many \u0027podStatuses\u0027 descriptors are used\n1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console\n1959564 - Test verify /run filesystem contents failing\n1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot\n1959650 - Gather SDI-related MachineConfigs\n1959658 - showing a lot \"constructing many client instances from the same exec auth config\"\n1959696 - Deprecate \u0027ConsoleConfigRoute\u0027 struct in console-operator config\n1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO\n1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode\n1959711 - Egressnetworkpolicy  doesn\u0027t work when configure the EgressIP\n1959786 - [dualstack]EgressIP doesn\u0027t work on dualstack cluster for IPv6\n1959916 - Console not works well against a proxy in front of openshift clusters\n1959920 - UEFISecureBoot set not on the right master node\n1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []\n1960035 - iptables is missing from ose-keepalived-ipfailover image\n1960059 - Remove \"Grafana UI\" link from Console Monitoring \u003e Dashboards page\n1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions\n1960129 - [e2e][automation] add smoke tests about VM pages and actions\n1960134 - some origin images are not public\n1960171 - Enable SNO checks for image-registry\n1960176 - CCO should recreate a user for the component when it was removed from the cloud providers\n1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled\n1960255 - fixed obfuscation permissions\n1960257 - breaking changes in pr template\n1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on 
shutdown, policy Cluster has significant performance cost\n1960323 - Address issues raised by coverity security scan\n1960324 - manifests: extra \"spec.version\" in console quickstarts makes CVO hotloop\n1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960339 - manifests: unset \"preemptionPolicy\" makes CVO hotloop\n1960531 - Items under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027 keep added for every access\n1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana\n1960546 - Add virt_platform metric to the collected metrics\n1960554 - Remove rbacv1beta1 handling code\n1960612 - Node disk info in overview/details does not account for second drive where /var is located\n1960619 - Image registry integration tests use old-style OAuth tokens\n1960683 - GlobalConfigPage is constantly requesting resources\n1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces\n1960716 - Missing details for debugging\n1960732 - Outdated manifests directory in CSI driver operator repositories\n1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master\n1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be \"the newest\"\n1960767 - /metrics endpoint of the Grafana UI is accessible without authentication\n1960780 - CI: failed to create PDB \"service-test\" the server could not find the requested resource\n1961064 - Documentation link to network policies is outdated\n1961067 - Improve log gathering logic\n1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs\n1961091 - Gather MachineHealthCheck definitions\n1961120 - CSI driver operators fail when 
upgrading a cluster\n1961173 - recreate existing static pod manifests instead of updating\n1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing\n1961314 - Race condition in operator-registry pull retry unit tests\n1961320 - CatalogSource does not emit any metrics to indicate if it\u0027s ready or not\n1961336 - Devfile sample for BuildConfig is not defined\n1961356 - Update single quotes to double quotes in string\n1961363 - Minor string update for \" No Storage classes found in cluster, adding source is disabled.\"\n1961393 - DetailsPage does not work with group~version~kind\n1961452 - Remove \"Alertmanager UI\" link from Console Monitoring \u003e Alerting page\n1961466 - Some dropdown placeholder text on route creation page is not translated\n1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true\n1961506 - NodePorts do not work on RHEL 7.9 workers (was \"4.7 -\u003e 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers\")\n1961536 - clusterdeployment without pull secret is crashing assisted service pod\n1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop\n1961545 - Fixing Documentation Generation\n1961550 - HAproxy pod logs showing error \"another server named \u0027pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080\u0027 was already defined at line 326, please use distinct names\"\n1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig\n1961561 - The encryption controllers send lots of request to an API server\n1961582 - Build failure on s390x\n1961644 - NodeAuthenticator tests are failing in IPv6\n1961656 - driver-toolkit missing some release metadata\n1961675 - Kebab menu of taskrun contains Edit options which should not be present\n1961701 - Enhance gathering of events\n1961717 - Update runtime dependencies to Wallaby builds for bugfixes\n1961829 - 
Quick starts prereqs not shown when description is long\n1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy\n1961878 - Add Sprint 199 translations\n1961897 - Remove history listener before console UI is unmounted\n1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes\n1962062 - Monitoring dashboards should support default values of \"All\"\n1962074 - SNO:the pod get stuck in CreateContainerError and prompt \"failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable\" after adding a performanceprofile\n1962095 - Replace gather-job image without FQDN\n1962153 - VolumeSnapshot routes are ambiguous, too generic\n1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime\n1962219 - NTO relies on unreliable leader-for-life implementation. \n1962256 - use RHEL8 as the vm-example\n1962261 - Monitoring components requesting more memory than they use\n1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster\n1962347 - Cluster does not exist logs after successful installation\n1962392 - After upgrade from 4.5.16 to 4.6.17, customer\u0027s application is seeing re-transmits\n1962415 - duplicate zone information for in-tree PV after enabling migration\n1962429 - Cannot create windows vm because kubemacpool.io denied the request\n1962525 - [Migration] SDN migration stuck on MCO on RHV cluster\n1962569 - NetworkPolicy details page should also show Egress rules\n1962592 - Worker nodes restarting during OS installation\n1962602 - Cloud credential operator scrolls info \"unable to provide upcoming...\" on unsupported platform\n1962630 - NTO: Ship the current upstream TuneD\n1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root\n1962698 - Console-operator can not create resource console-public 
configmap in the openshift-config-managed namespace\n1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint\n1962740 - Add documentation to Egress Router\n1962850 - [4.8] Bootimage bump tracker\n1962882 - Version pod does not set priorityClassName\n1962905 - Ramdisk ISO source defaulting to \"http\" breaks deployment on a good amount of BMCs\n1963068 - ironic container should not specify the entrypoint\n1963079 - KCM/KS: ability to enforce localhost communication with the API server. \n1963154 - Current BMAC reconcile flow skips Ironic\u0027s deprovision step\n1963159 - Add Sprint 200 translations\n1963204 - Update to 8.4 IPA images\n1963205 - Installer is using old redirector\n1963208 - Translation typos/inconsistencies for Sprint 200 files\n1963209 - Some strings in public.json have errors\n1963211 - Fix grammar issue in kubevirt-plugin.json string\n1963213 - Memsource download script running into API error\n1963219 - ImageStreamTags not internationalized\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n1963267 - Warning: Invalid DOM property `classname`. Did you mean `className`? console warnings in volumes table\n1963502 - create template from is not descriptive\n1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too\n1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault\n1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 
\n1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies\n1963871 - cluster-etcd-operator:[build] upgrade to go 1.16\n1963896 - The VM disks table does not show easy links to PVCs\n1963912 - \"[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}\" failures on vsphere\n1963932 - Installation failures in bootstrap in OpenStack release jobs\n1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail\n1964059 - rebase openshift/sdn to kube 1.21.1\n1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration\n1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to \"Unknown provider baremetal\"\n1964243 - The `oc compliance fetch-raw` doesn\u2019t work for disconnected cluster\n1964270 - Failed to install \u0027cluster-kube-descheduler-operator\u0027 with error: \"clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\\\": must be no more than 63 characters\"\n1964319 - Network policy \"deny all\" interpreted as \"allow all\" in description page\n1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured\n1964472 - Make project and namespace requirements more visible rather than giving me an error after submission\n1964486 - Bulk adding of CIDR IPS to whitelist is not working\n1964492 - Pick 102171: Implement support for watch initialization in P\u0026F\n1964625 - NETID duplicate check is only required in NetworkPolicy Mode\n1964748 - Sync upstream 1.7.2 downstream\n1964756 - PVC status is always in \u0027Bound\u0027 status when it is actually cloning\n1964847 - Sanity check test suite missing from the repo\n1964888 - opoenshift-apiserver imagestreamimports depend on \u003e34s timeout support, WAS: transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\n1964936 - error log for \"oc adm catalog mirror\" is not correct\n1964979 - Add mapping from ACI to infraenv to handle creation order issues\n1964997 - Helm Library charts are showing and can be installed from Catalog\n1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots\n1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation\n1965283 - 4.7-\u003e4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData:\n1965330 - oc image extract fails due to security capabilities on files\n1965334 - opm index add fails during image extraction\n1965367 - Typo in in etcd-metric-serving-ca resource name\n1965370 - \"Route\" is not translated in Korean or Chinese\n1965391 - When storage class is already present wizard do not jumps to \"Stoarge and nodes\"\n1965422 - runc is missing Provides oci-runtime in rpm spec\n1965522 - [v2v] Multiple typos on VM Import screen\n1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists\n1965909 - Replace \"Enable Taint Nodes\" by \"Mark nodes as dedicated\"\n1965921 - [oVirt] High performance VMs shouldn\u0027t be created with Existing policy\n1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request\n1966077 - `hidden` descriptor is visible in the Operator instance details page`\n1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11\n1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality\n1966138 - (release-4.8) Update K8s \u0026 OpenShift API versions\n1966156 - Issue with Internal Registry CA on the service pod\n1966174 - No storage class is installed, OCS and CNV installations fail\n1966268 - Workaround for Network Manager not supporting nmconnections priority\n1966401 - Revamp Ceph Table in 
Install Wizard flow\n1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert\n1966416 - (release-4.8) Do not exceed the data size limit\n1966459 - \u0027policy/v1beta1 PodDisruptionBudget\u0027 and \u0027batch/v1beta1 CronJob\u0027 appear in image-registry-operator log\n1966487 - IP address in Pods list table are showing node IP other than pod IP\n1966520 - Add button from ocs add capacity should not be enabled if there are no PV\u0027s\n1966523 - (release-4.8) Gather MachineAutoScaler definitions\n1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed\n1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug\n1966602 - don\u0027t require manually setting IPv6DualStack feature gate in 4.8\n1966620 - The bundle.Dockerfile in the repo is obsolete\n1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install\n1966654 - Alertmanager PDB is not created, but Prometheus UWM is\n1966672 - Add Sprint 201 translations\n1966675 - Admin console string updates\n1966677 - Change comma to semicolon\n1966683 - Translation bugs from Sprint 201 files\n1966684 - Verify \"Creating snapshot for claim \u003c1\u003e{pvcName}\u003c/1\u003e\" displays correctly\n1966697 - Garbage collector logs every interval - move to debug level\n1966717 - include full timestamps in the logs\n1966759 - Enable downstream plugin for Operator SDK\n1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version\n1966813 - \"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1\n1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting for bootkub[e\"\n1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings \"ipv6.dhcp-duid=ll\" missing from dual stack 
install\n1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image\n1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored\n1967197 - 404 errors loading some i18n namespaces\n1967207 - Getting started card: console customization resources link shows other resources\n1967208 - Getting started card should use semver library for parsing the version instead of string manipulation\n1967234 - Console is continuously polling for ConsoleLink acm-link\n1967275 - Awkward wrapping in getting started dashboard card\n1967276 - Help menu tooltip overlays dropdown\n1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check\n1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit\n1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests\n1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small\n1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit\n1967595 - Fixes the remaining lint issues\n1967614 - prometheus-k8s pods can\u0027t be scheduled due to volume node affinity conflict\n1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn\u0027t work if ovirt-config.yaml doesn\u0027t exist and user should fill the FQDN URL\n1967625 - Add OpenShift Dockerfile for cloud-provider-aws\n1967631 - [4.8.0] Cluster install failed due to timeout while \"Waiting for control plane\"\n1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting 
for bootkube\"\n1967639 - Console whitescreens if user preferences fail to load\n1967662 - machine-api-operator should not use deprecated \"platform\" field in infrastructures.config.openshift.io\n1967667 - Add Sprint 202 Round 1 translations\n1967713 - Insights widget shows invalid link to the OCM\n1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming\n1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than \"NoExecute\"\n1967803 - should update to 7.5.5 for grafana resources version label\n1967832 - Add more tests for periodic.go\n1967833 - Add tasks pool to tasks_processing\n1967842 - Production logs are spammed on \"OCS requirements validation status Insufficient hosts to deploy OCS. A minimum of 3 hosts is required to deploy OCS\"\n1967843 - Fix null reference to messagesToSearch in gather_logs.go\n1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring\n1967933 - Network-Tools debug scripts not working as expected\n1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: \"mkdir: cannot create directory \u0027/var/lib/pgsql/data/userdata\u0027: Permission denied\"\n1968019 - drain timeout and pool degrading period is too short\n1968067 - [master] Agent validation not including reason for being insufficient\n1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed\n1968175 - [4.8.0] Agent validation not including reason for being insufficient\n1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration\n1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn\u0027t be required\n1968435 - [4.8.0] Unclear message in case of missing clusterImageSet\n1968436 - Listeners timeout updated to remain using default value\n1968449 - [4.8.0] Wrong Install-config override documentation\n1968451 - [4.8.0] Garbage collector not cleaning up directories of removed 
clusters\n1968452 - [4.8.0] [doc] \"Mirror Registry Configuration\" doc section needs clarification of functionality and limitations\n1968454 - [4.8.0] backend events generated with wrong namespace for agent\n1968455 - [4.8.0] Assisted Service operator\u0027s controllers are starting before the base service is ready\n1968515 - oc should set user-agent when talking with registry\n1968531 - Sync upstream 1.8.0 downstream\n1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn\u0027t clean up properly\n1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted\n1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox\n1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil\n1968701 - Bare metal IPI installation is failed due to worker inspection failure\n1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed\n1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning\n1969284 - Console Query Browser: Can\u0027t reset zoom to fixed time range after dragging to zoom\n1969315 - [4.8.0] BMAC doesn\u0027t check if ISO Url changed before queuing BMH for reconcile\n1969352 - [4.8.0] Creating BareMetalHost without the \"inspect.metal3.io\" does not automatically add it\n1969363 - [4.8.0] Infra env should show the time that ISO was generated. 
\n1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it\n1969386 - Filesystem\u0027s Utilization doesn\u0027t show in VM overview tab\n1969397 - OVN bug causing subports to stay DOWN fails installations\n1969470 - [4.8.0] Misleading error in case of install-config override bad input\n1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step\n1969525 - Replace golint with revive\n1969535 - Topology edit icon does not link correctly when branch name contains slash\n1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it\n1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long\n1969561 - Test \"an end user can use OLM can subscribe to the operator\" generates deprecation alert\n1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire\n1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io\n1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1\n1969626 - Portfoward stream cleanup can cause kubelet to panic\n1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out\n1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check\n1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso\n1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups\n1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml\n1969784 - WebTerminal widget should send resize events\n1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails\n1969891 - Fix rotated pipelinerun status icon issue in safari\n1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse\n1969903 - Provisioning a large number 
of hosts results in an unexpected delay in hosts becoming available\n1969951 - Cluster local doesn\u0027t work for knative services created from dev console\n1969969 - ironic-rhcos-downloader container uses and old base image\n1970062 - ccoctl does not work with STS authentication\n1970068 - ovnkube-master logs \"Failed to find node ips for gateway\" error\n1970126 - [4.8.0] Disable \"metrics-events\" when deploying using the operator\n1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change\n1970262 - [4.8.0] Remove Agent CRD Status fields not needed\n1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs\n1970269 - [4.8.0] missing role in agent CRD\n1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs\n1970381 - Monitoring dashboards: Custom time range inputs should retain their values\n1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed\n1970401 - [4.8.0] AgentLabelSelector is required yet not supported\n1970415 - SR-IOV Docs needs documentation for disabling port security on a network\n1970470 - Add pipeline annotation to Secrets which are created for a private repo\n1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod\n1970624 - 4.7-\u003e4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io\n1970828 - \"500 Internal Error\" for all openshift-monitoring routes\n1970975 - 4.7 -\u003e 4.8 upgrades on AWS take longer than expected\n1971068 - Removing invalid AWS instances from the CF templates\n1971080 - 4.7-\u003e4.8 CI: KubePodNotReady due to MCD\u0027s 5m sleep between drain attempts\n1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 !\n1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces\n1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing 
\"Validated\" condition about VIP not matching machine network\n1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn\u0027t work - clusteroperator/kube-apiserver is not upgradeable\n1971589 - [4.8.0] Telemetry-client won\u0027t report metrics in case the cluster was installed using the assisted operator\n1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service\n1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery\n1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409)\n1971739 - Keep /boot RW when kdump is enabled\n1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly\n1972128 - ironic-static-ip-manager container still uses 4.7 base image\n1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are\n1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster\n1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1972262 - [4.8.0] \"baremetalhost.metal3.io/detached\" uses boolean value where string is expected\n1972426 - Adopt failure can trigger deprovisioning\n1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage\n1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration\n1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall\n1972565 - performance issues due to lost node, pods taking too long to relaunch\n1972662 - DPDK KNI modules need some additional tools\n1972676 - Requirements for authenticating kernel modules with X.509\n1972687 - Using bound SA tokens causes causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings\n1972690 - [4.8.0] infra-env condition message isn\u0027t informative in case of 
missing pull secret\n1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration\n1972768 - kube-apiserver setup fail while installing SNO due to port being used\n1972864 - New `local-with-fallback` service annotation does not preserve source IP\n1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8\n1973117 - No storage class is installed, OCS and CNV installations fail\n1973233 - remove kubevirt images and references\n1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. \n1973428 - Placeholder bug for OCP 4.8.0 image release\n1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped\n1973672 - fix ovn-kubernetes NetworkPolicy 4.7-\u003e4.8 upgrade issue\n1973995 - [Feature:IPv6DualStack] tests are failing in dualstack\n1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings\n1974447 - Requirements for nvidia GPU driver container for driver toolkit\n1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. \n1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel\n1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion\n1974746 - [4.8.0] File system usage not being logged appropriately\n1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 
\n1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster\n1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string\n1974850 - [4.8] coreos-installer failing Execshield\n1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift\n1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing\n1975155 - Kubernetes service IP cannot be accessed for rhel worker\n1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types\n1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData\n1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified\n1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve\n1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn\n1975672 - [4.8.0] Production logs are spammed on \"Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient\"\n1975789 - worker nodes rebooted when we simulate a case where the api-server is down\n1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s]\n1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn\u0027t work -  ingresscontroller \"default\" is degraded\n1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]\n1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts\n1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO\n1977233 - [4.8] Unable to 
authenticate against IDP after upgrade to 4.8-rc.1\n1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO\n1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller\n1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes\n1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses\n1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8\n1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod\n1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used\n1980788 - NTO-shipped stalld can segfault\n1981633 - enhance service-ca injection\n1982250 - Performance Addon Operator fails to install after catalog source becomes ready\n1982252 - olm Operator is in CrashLoopBackOff state with error \"couldn\u0027t cleanup cross-namespace ownerreferences\"\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2016-2183\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-15106\nhttps://access.redhat.com/security/cve/CVE-2020-15112\nhttps://access.redhat.com/security/cve/CVE-2020-15113\nhttps://access.redhat.com/security/cve/CVE-2020-15114\nhttps://access.redhat.com/security/cve/CVE-2020-15136\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-26541\nhttps://access.redhat.com/security/cve/CVE-2020-28469\nhttps://access.redhat.com/security/cve/CVE-2020-28500\nhttps://access.redhat.com/security/cve/CVE-2020-28852\nhttps://access.redhat.com/security/cve/CVE-2021-3114\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3636\nhttps://access.redhat.com/security/cve/CVE-2021-20206\nhttps://access.redhat.com/security/cve/CVE-2021-20271\nhttps://access.redhat.com/security/cve/CVE-2021-20291\nhttps://access.redhat.com/security/cve/CVE-2021-21419\nhttps://access.redhat.com/security/cve/CVE-2021-21623\nhttps://access.redhat.com/security/cve/CVE-2021-21639\nhttps://access.redhat.com/security/cve/CVE-2021-21640\nhttps://access.redhat.com/security/cve/CVE-2021-21648\nhttps://access.redhat.com/security/cve/CVE-2021-22133\nhttps://access.redhat.com/security/cve/CVE-2021-23337\nhttps://access.redhat.com/security/cve/CVE-2021-23362\nhttps://access.redhat.com/security/cve/CVE-2021-23368\nhttps://access.redhat.com/security/cve/CVE-2021-23382\nhttps://access.redhat.com/security/cve/CVE-2021-25735\nhttps://access.redhat.com/security/cve/CVE-2021-25737\nhttps://access.r
edhat.com/security/cve/CVE-2021-26539\nhttps://access.redhat.com/security/cve/CVE-2021-26540\nhttps://access.redhat.com/security/cve/CVE-2021-27292\nhttps://access.redhat.com/security/cve/CVE-2021-28092\nhttps://access.redhat.com/security/cve/CVE-2021-29059\nhttps://access.redhat.com/security/cve/CVE-2021-29622\nhttps://access.redhat.com/security/cve/CVE-2021-32399\nhttps://access.redhat.com/security/cve/CVE-2021-33034\nhttps://access.redhat.com/security/cve/CVE-2021-33194\nhttps://access.redhat.com/security/cve/CVE-2021-33909\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ\nd4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26\nZ7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs\nImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h\ngep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8\nTWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX\nX//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb\n1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA\nmtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/\n0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6\ndpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx\n+bLvJlpGiMg=\n=NJcM\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 
nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - 
CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. VDSM manages and monitors the host\u0027s storage, memory and\nnetworks as well as virtual machine creation, other host administration\ntasks, statistics gathering, and log collection. \n\nBug Fix(es):\n\n* An update in libvirt has changed the way block threshold events are\nsubmitted. \nAs a result, the VDSM was confused by the libvirt event, and tried to look\nup a drive, logging a warning about a missing drive. \nIn this release, the VDSM has been adapted to handle the new libvirt\nbehavior, and does not log warnings about missing drives. (BZ#1948177)\n\n* Previously, when a virtual machine was powered off on the source host of\na live migration and the migration finished successfully at the same time,\nthe two events  interfered with each other, and sometimes prevented\nmigration cleanup resulting in additional migrations from the host being\nblocked. \nIn this release, additional migrations are not blocked. (BZ#1959436)\n\n* Previously, when failing to execute a snapshot and re-executing it later,\nthe second try would fail due to using the previous execution data. In this\nrelease, this data will be used only when needed, in recovery mode. \n(BZ#1984209)\n\n4. Then engine deletes the volume and causes data corruption. \n1998017 - Keep cinbderlib dependencies optional for 4.4.8\n\n6. 
\n\nBug Fix(es):\n\n* Documentation is referencing deprecated API for Service Export -\nSubmariner (BZ#1936528)\n\n* Importing of cluster fails due to error/typo in generated command\n(BZ#1936642)\n\n* RHACM 2.2.2 images (BZ#1938215)\n\n* 2.2 clusterlifecycle fails to allow provision `fips: true` clusters on\naws, vsphere (BZ#1941778)\n\n3. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-28500",
        "trust": 4.1
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.5
      },
      {
        "db": "PACKETSTORM",
        "id": "163276",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "162151",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "162901",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "163690",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "163747",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164090",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1225",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1871",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5790",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3036",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2232",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2182",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2555",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2657",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4568",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2555",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022052615",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021090922",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021062702",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-373964",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168352",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ]
  },
  "id": "VAR-202102-1492",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      }
    ],
    "trust": 0.30766129
  },
  "last_update_date": "2023-12-18T11:50:50.527000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "perf",
        "trust": 0.8,
        "url": "https://github.com/lodash/lodash/pull/5065"
      },
      {
        "title": "lodash Security vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=142393"
      },
      {
        "title": "Debian CVElist Bug Report Logs: CVE-2021-23337 CVE-2020-28500",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=705b23b69122ed473c796891371a9f52"
      },
      {
        "title": "IBM: Security Bulletin: IBM Integration Bus \u0026 IBM App Connect Enterprise V11 are affected by vulnerabilities in Node.js (CVE-2020-28500)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3d9a3b6c21f9e87c491e9c1a56004595"
      },
      {
        "title": "IBM: Security Bulletin: A security vulnerability in Node.js Lodash module affects IBM Cloud Automation Manager.",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ab2b9d02254c2d45625dc8b682d0c4eb"
      },
      {
        "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory"
      },
      {
        "title": "tsp-vulnerable-app-nodejs-express",
        "trust": 0.1,
        "url": "https://github.com/the-scan-project/tsp-vulnerable-app-nodejs-express "
      },
      {
        "title": "sample-vulnerable-app-nodejs-express",
        "trust": 0.1,
        "url": "https://github.com/samoylenko/sample-vulnerable-app-nodejs-express "
      },
      {
        "title": "lm-test",
        "trust": 0.1,
        "url": "https://github.com/mishakav/lm-test "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-Other",
        "trust": 1.0
      },
      {
        "problemtype": "others (CWE-Other) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.6,
        "url": "https://snyk.io/vuln/snyk-java-orgfujionwebjars-1074896"
      },
      {
        "trust": 2.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500"
      },
      {
        "trust": 1.8,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.8,
        "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
      },
      {
        "trust": 1.8,
        "url": "https://github.com/lodash/lodash/blob/npm/trimend.js%23l8"
      },
      {
        "trust": 1.8,
        "url": "https://github.com/lodash/lodash/pull/5065"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjars-1074894"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbower-1074892"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbowergithublodash-1074895"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsnpm-1074893"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-js-lodash-1018905"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
      },
      {
        "trust": 0.9,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu99475301/index.html"
      },
      {
        "trust": 0.7,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-integration-bus-ibm-app-connect-enterprise-v11-are-affected-by-vulnerabilities-in-node-js-cve-2020-28500/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-28500"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-23337"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-watson-discovery-for-ibm-cloud-pak-for-data-affected-by-vulnerability-in-node-js-3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2657"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1225"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162901/red-hat-security-advisory-2021-2179-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-insights-is-affected-by-multiple-vulnerabilities-5/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6486341"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163747/red-hat-security-advisory-2021-3016-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-automation-manager-2/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164090/red-hat-security-advisory-2021-3459-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1871"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3036"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021090922"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163276/red-hat-security-advisory-2021-2543-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2555"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022052615"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6524656"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6483681"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162151/red-hat-security-advisory-2021-1168-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021062702"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2232"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163690/red-hat-security-advisory-2021-2438-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-pak-for-multicloud-management-managed-service/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-potential-vulnerability-with-node-js-lodash-module-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2555"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2182"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5790"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/lodash-denial-of-service-via-tonumber-trim-36225"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-cloud-pak-for-integration-is-vulnerable-to-node-js-lodash-vulnerability-cve-2020-28500/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23337"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-28852"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-2708"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8286"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8285"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3114"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8231"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27219"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8284"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/2974891"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33034"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-28092"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33909"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-32399"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23368"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27292"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23382"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21321"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28851"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21322"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/the-scan-project/tsp-vulnerable-app-nodejs-express"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13949"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhb"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3842"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13776"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3177"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13949"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2179"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/technical_notes"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25737"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26540"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23368"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33194"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26539"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29059"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2438"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2437"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23382"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29622"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29418"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23369"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23364"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23343"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21309"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23383"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28918"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33033"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25217"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3016"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3377"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21272"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29477"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23346"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29478"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23839"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33910"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3459"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:1168"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26708"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27152"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21322"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26708"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2526"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29154"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0686"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32208"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6429"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0512"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1650"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-02-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "date": "2021-02-15T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "date": "2021-04-05T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "date": "2021-06-24T17:54:53",
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "date": "2021-06-01T15:17:45",
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "date": "2021-07-28T14:53:49",
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "date": "2021-08-06T14:02:37",
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "date": "2021-09-09T13:33:33",
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "date": "2021-04-13T15:38:30",
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "date": "2022-09-13T15:42:14",
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "date": "2021-02-15T11:15:12.397000",
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "date": "2021-02-15T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "date": "2022-09-20T05:44:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "date": "2022-09-13T21:18:50.543000",
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "date": "2022-11-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Lodash\u00a0 Vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ],
    "trust": 0.6
  }
}

cve-2019-1010266
Vulnerability from cvelistv5
Published
2019-07-17 20:25
Modified
2024-08-05 03:07
Severity
Summary
lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
Impacted products
lodash (vendor: lodash)


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-05T03:07:18.476Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JS-LODASH-73639"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://github.com/lodash/lodash/issues/3359"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://github.com/lodash/lodash/wiki/Changelog"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://security.netapp.com/advisory/ntap-20190919-0004/"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "lodash",
          "vendor": "lodash",
          "versions": [
            {
              "status": "affected",
              "version": "\u003c4.17.11 [fixed: 4.17.11]"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-400",
              "description": "CWE-400: Uncontrolled Resource Consumption",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2019-09-19T16:06:08",
        "orgId": "7556d962-6fb7-411e-85fa-6cd62f095ba8",
        "shortName": "dwf"
      },
      "references": [
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JS-LODASH-73639"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://github.com/lodash/lodash/issues/3359"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://github.com/lodash/lodash/wiki/Changelog"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://security.netapp.com/advisory/ntap-20190919-0004/"
        }
      ],
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "cve-assign@distributedweaknessfiling.org",
          "ID": "CVE-2019-1010266",
          "STATE": "PUBLIC"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "lodash",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "\u003c4.17.11 [fixed: 4.17.11]"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "lodash"
              }
            ]
          }
        },
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11."
            }
          ]
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "CWE-400: Uncontrolled Resource Consumption"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "https://snyk.io/vuln/SNYK-JS-LODASH-73639",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JS-LODASH-73639"
            },
            {
              "name": "https://github.com/lodash/lodash/issues/3359",
              "refsource": "MISC",
              "url": "https://github.com/lodash/lodash/issues/3359"
            },
            {
              "name": "https://github.com/lodash/lodash/wiki/Changelog",
              "refsource": "CONFIRM",
              "url": "https://github.com/lodash/lodash/wiki/Changelog"
            },
            {
              "name": "https://security.netapp.com/advisory/ntap-20190919-0004/",
              "refsource": "CONFIRM",
              "url": "https://security.netapp.com/advisory/ntap-20190919-0004/"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "7556d962-6fb7-411e-85fa-6cd62f095ba8",
    "assignerShortName": "dwf",
    "cveId": "CVE-2019-1010266",
    "datePublished": "2019-07-17T20:25:30",
    "dateReserved": "2019-03-20T00:00:00",
    "dateUpdated": "2024-08-05T03:07:18.476Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
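
The Date-handler issue in CVE-2019-1010266 is a regular-expression denial of service: a pattern with an unbounded quantifier backtracks quadratically when matched against a very long attacker-supplied string. A minimal sketch of the effect in plain JavaScript (illustrative only, not lodash's actual code; the pattern and lengths here are hypothetical):

```javascript
// A trailing-whitespace pattern like /\s+$/ is quadratic on inputs that end in
// a non-space character: for every start position the engine consumes the
// whole whitespace run, fails at the final "x", and retries one position later.
const reTrimEnd = /\s+$/;

// Returns the time in milliseconds spent matching an input of the given length.
function timeMatch(len) {
  const input = ' '.repeat(len) + 'x'; // long whitespace run, no match possible
  const start = process.hrtime.bigint();
  reTrimEnd.test(input);
  return Number(process.hrtime.bigint() - start) / 1e6;
}

// Doubling the input length roughly quadruples the time, so one long string
// from a remote caller can tie up the event loop.
console.log(timeMatch(10000));
console.log(timeMatch(20000));
```

Version 4.17.11 reworked the affected patterns; callers can also cap the length of untrusted strings before passing them to such helpers.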

cve-2020-8203
Vulnerability from cvelistv5
Published
2020-07-15 16:10
Modified
2024-08-04 09:56
Summary
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
Impacted products
lodash (vendor: n/a)


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-04T09:56:28.214Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://hackerone.com/reports/712065"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpuApr2021.html"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://security.netapp.com/advisory/ntap-20200724-0006/"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://github.com/lodash/lodash/issues/4874"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "lodash",
          "vendor": "n/a",
          "versions": [
            {
              "status": "affected",
              "version": "Not Fixed"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-770",
              "description": "Allocation of Resources Without Limits or Throttling (CWE-770)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2022-04-19T23:23:22",
        "orgId": "36234546-b8fa-4601-9d6f-f4e334aa8ea1",
        "shortName": "hackerone"
      },
      "references": [
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://hackerone.com/reports/712065"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpuApr2021.html"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://security.netapp.com/advisory/ntap-20200724-0006/"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://github.com/lodash/lodash/issues/4874"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
        }
      ],
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "support@hackerone.com",
          "ID": "CVE-2020-8203",
          "STATE": "PUBLIC"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "lodash",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "Not Fixed"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "n/a"
              }
            ]
          }
        },
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20."
            }
          ]
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "Allocation of Resources Without Limits or Throttling (CWE-770)"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "https://hackerone.com/reports/712065",
              "refsource": "MISC",
              "url": "https://hackerone.com/reports/712065"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpuApr2021.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpuApr2021.html"
            },
            {
              "name": "https://security.netapp.com/advisory/ntap-20200724-0006/",
              "refsource": "CONFIRM",
              "url": "https://security.netapp.com/advisory/ntap-20200724-0006/"
            },
            {
              "name": "https://github.com/lodash/lodash/issues/4874",
              "refsource": "MISC",
              "url": "https://github.com/lodash/lodash/issues/4874"
            },
            {
              "name": "https://www.oracle.com//security-alerts/cpujul2021.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpuoct2021.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpujan2022.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpuapr2022.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "36234546-b8fa-4601-9d6f-f4e334aa8ea1",
    "assignerShortName": "hackerone",
    "cveId": "CVE-2020-8203",
    "datePublished": "2020-07-15T16:10:27",
    "dateReserved": "2020-01-28T00:00:00",
    "dateUpdated": "2024-08-04T09:56:28.214Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
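
The CVE-2020-8203 description is terse, so it helps to see the shape of the bug: `_.zipObjectDeep` builds objects from caller-supplied property *paths*, and before 4.17.20 a path segment of `__proto__` walked onto `Object.prototype`. The sketch below uses a naive deep path-setter of my own (not lodash source) to show the mechanism; the function name is hypothetical:

```javascript
// A naive "set value at dotted path" helper, like the logic behind
// _.zipObjectDeep(['a.b.c'], [1]). It walks attacker-controlled path segments
// without filtering "__proto__", so a crafted key pollutes Object.prototype.
function unsafeSetPath(obj, path, value) {
  const keys = path.split('.');
  let node = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (typeof node[keys[i]] !== 'object' || node[keys[i]] === null) {
      node[keys[i]] = {};
    }
    node = node[keys[i]]; // for "__proto__" this steps onto Object.prototype
  }
  node[keys[keys.length - 1]] = value;
  return obj;
}

// Attacker-shaped input, analogous to _.zipObjectDeep(['__proto__.polluted'], [true]):
unsafeSetPath({}, '__proto__.polluted', true);
console.log({}.polluted); // → true: an unrelated fresh object now has the property

delete Object.prototype.polluted; // clean up so the pollution doesn't leak
```

Depending on what downstream code reads from such properties, the result ranges from denial of service to logic bypasses, which is why the record classes it under CWE-770/CWE-1321-style resource and prototype abuse.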

cve-2021-23337
Vulnerability from cvelistv5
Published
2021-02-15 12:15
Modified
2024-09-16 19:15
Summary
Command Injection
Impacted products
Lodash (vendor: n/a)


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-03T19:05:55.700Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JS-LODASH-1040724"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWER-1074928"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSNPM-1074929"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARS-1074930"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWERGITHUBLODASH-1074931"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGFUJIONWEBJARS-1074932"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://github.com/lodash/lodash/blob/ddfd9b11a0126db2302cb70ec9973b66baec0975/lodash.js%23L14851"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "Lodash",
          "vendor": "n/a",
          "versions": [
            {
              "status": "affected",
              "version": "prior to 4.17.21"
            }
          ]
        }
      ],
      "credits": [
        {
          "lang": "en",
          "value": "Marc Hassan"
        }
      ],
      "datePublic": "2021-02-15T00:00:00",
      "descriptions": [
        {
          "lang": "en",
          "value": "Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "availabilityImpact": "HIGH",
            "baseScore": 7.2,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitCodeMaturity": "PROOF_OF_CONCEPT",
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "remediationLevel": "UNAVAILABLE",
            "reportConfidence": "CONFIRMED",
            "scope": "UNCHANGED",
            "temporalScore": 6.8,
            "temporalSeverity": "MEDIUM",
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H/E:P/RL:U/RC:C",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "description": "Command Injection",
              "lang": "en",
              "type": "text"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2022-09-13T11:06:34",
        "orgId": "bae035ff-b466-4ff4-94d0-fc9efd9e1730",
        "shortName": "snyk"
      },
      "references": [
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JS-LODASH-1040724"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWER-1074928"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSNPM-1074929"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARS-1074930"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWERGITHUBLODASH-1074931"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGFUJIONWEBJARS-1074932"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://github.com/lodash/lodash/blob/ddfd9b11a0126db2302cb70ec9973b66baec0975/lodash.js%23L14851"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
        }
      ],
      "title": "Command Injection",
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "report@snyk.io",
          "DATE_PUBLIC": "2021-02-15T12:13:18.729628Z",
          "ID": "CVE-2021-23337",
          "STATE": "PUBLIC",
          "TITLE": "Command Injection"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "Lodash",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "prior to 4.17.21"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "n/a"
              }
            ]
          }
        },
        "credit": [
          {
            "lang": "eng",
            "value": "Marc Hassan"
          }
        ],
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function."
            }
          ]
        },
        "impact": {
          "cvss": {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "availabilityImpact": "HIGH",
            "baseScore": 7.2,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "scope": "UNCHANGED",
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H/E:P/RL:U/RC:C",
            "version": "3.1"
          }
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "Command Injection"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "https://snyk.io/vuln/SNYK-JS-LODASH-1040724",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JS-LODASH-1040724"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWER-1074928",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWER-1074928"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSNPM-1074929",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSNPM-1074929"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARS-1074930",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARS-1074930"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWERGITHUBLODASH-1074931",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWERGITHUBLODASH-1074931"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGFUJIONWEBJARS-1074932",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGFUJIONWEBJARS-1074932"
            },
            {
              "name": "https://github.com/lodash/lodash/blob/ddfd9b11a0126db2302cb70ec9973b66baec0975/lodash.js%23L14851",
              "refsource": "MISC",
              "url": "https://github.com/lodash/lodash/blob/ddfd9b11a0126db2302cb70ec9973b66baec0975/lodash.js%23L14851"
            },
            {
              "name": "https://www.oracle.com//security-alerts/cpujul2021.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
            },
            {
              "name": "https://security.netapp.com/advisory/ntap-20210312-0006/",
              "refsource": "CONFIRM",
              "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpuoct2021.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpujan2022.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpujul2022.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
            },
            {
              "name": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf",
              "refsource": "CONFIRM",
              "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "bae035ff-b466-4ff4-94d0-fc9efd9e1730",
    "assignerShortName": "snyk",
    "cveId": "CVE-2021-23337",
    "datePublished": "2021-02-15T12:15:14.715164Z",
    "dateReserved": "2021-01-08T00:00:00",
    "dateUpdated": "2024-09-16T19:15:17.074Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
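
CVE-2021-23337 concerns `_.template`, which compiles template text into a function with `new Function`; before 4.17.21 an attacker-influenced `variable` option was spliced unvalidated into the generated source. The sketch below is a deliberately naive stand-in compiler (not lodash's actual internals; `naiveCompile` and `pwn` are made-up names) showing how such an option can break out of the compiled code:

```javascript
// A toy template compiler that, like pre-4.17.21 lodash, interpolates the
// "variable" option directly into the generated function body.
function naiveCompile(source, variableOption) {
  const body =
    'var __p = "";' +
    'with (' + variableOption + ' || {}) { __p += "' + source + '"; }' +
    'return __p;';
  return new Function('obj', body); // non-strict body, so `with` is legal
}

// Benign use:
console.log(naiveCompile('hello', 'obj')({})); // → "hello"

// A crafted option closes the with(...) early and injects an arbitrary call:
let escaped = false;
globalThis.pwn = () => { escaped = true; };
naiveCompile('x', 'obj || {}) { pwn(); } if (obj')({});
console.log(escaped); // → true: attacker-chosen code ran

delete globalThis.pwn;
```

This is why the record's CVSS vector has PR:H: the attacker must already control template *options*, not just template data, which tempers the otherwise high impact.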

cve-2018-3721
Vulnerability from cvelistv5
Published
2018-06-07 02:00
Modified
2024-09-16 22:34
Summary
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
Impacted products
lodash node module (vendor: HackerOne)


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-05T04:50:30.535Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://hackerone.com/reports/310443"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://github.com/lodash/lodash/commit/d8e069cc3410082e44eb18fcf8e7f3d08ebe1d4a"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://security.netapp.com/advisory/ntap-20190919-0004/"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "lodash node module",
          "vendor": "HackerOne",
          "versions": [
            {
              "status": "affected",
              "version": "Versions before 4.17.5"
            }
          ]
        }
      ],
      "datePublic": "2018-04-26T00:00:00",
      "descriptions": [
        {
          "lang": "en",
          "value": "lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of \"Object\" via __proto__, causing the addition or modification of an existing property that will exist on all objects."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-471",
              "description": "Modification of Assumed-Immutable Data (MAID) (CWE-471)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2019-09-19T16:06:08",
        "orgId": "36234546-b8fa-4601-9d6f-f4e334aa8ea1",
        "shortName": "hackerone"
      },
      "references": [
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://hackerone.com/reports/310443"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://github.com/lodash/lodash/commit/d8e069cc3410082e44eb18fcf8e7f3d08ebe1d4a"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://security.netapp.com/advisory/ntap-20190919-0004/"
        }
      ],
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "support@hackerone.com",
          "DATE_PUBLIC": "2018-04-26T00:00:00",
          "ID": "CVE-2018-3721",
          "STATE": "PUBLIC"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "lodash node module",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "Versions before 4.17.5"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "HackerOne"
              }
            ]
          }
        },
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of \"Object\" via __proto__, causing the addition or modification of an existing property that will exist on all objects."
            }
          ]
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "Modification of Assumed-Immutable Data (MAID) (CWE-471)"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "https://hackerone.com/reports/310443",
              "refsource": "MISC",
              "url": "https://hackerone.com/reports/310443"
            },
            {
              "name": "https://github.com/lodash/lodash/commit/d8e069cc3410082e44eb18fcf8e7f3d08ebe1d4a",
              "refsource": "MISC",
              "url": "https://github.com/lodash/lodash/commit/d8e069cc3410082e44eb18fcf8e7f3d08ebe1d4a"
            },
            {
              "name": "https://security.netapp.com/advisory/ntap-20190919-0004/",
              "refsource": "CONFIRM",
              "url": "https://security.netapp.com/advisory/ntap-20190919-0004/"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "36234546-b8fa-4601-9d6f-f4e334aa8ea1",
    "assignerShortName": "hackerone",
    "cveId": "CVE-2018-3721",
    "datePublished": "2018-06-07T02:00:00Z",
    "dateReserved": "2017-12-28T00:00:00",
    "dateUpdated": "2024-09-16T22:34:54.590Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

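The merge-style prototype pollution described in the record above (CVE-2018-3721) can be illustrated with a minimal, hypothetical deep-merge. This is not lodash's actual source; `naiveMerge` and the `isAdmin` payload key are invented here purely to show the bug class: a merge that copies every own key, including `__proto__`, lets a crafted JSON payload write onto `Object.prototype`.

```javascript
// Minimal, hypothetical deep-merge reproducing the CVE-2018-3721 bug class.
// It recurses into every own key of the source, including "__proto__",
// so attacker-controlled input can pollute Object.prototype.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === "object") {
      if (!target[key]) target[key] = {};
      naiveMerge(target[key], value); // walks into "__proto__" too
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse produces an *own* "__proto__" property, which Object.keys
// enumerates; the vulnerable merge then dereferences target["__proto__"]
// (i.e. Object.prototype) and assigns onto it.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
naiveMerge({}, payload);

console.log({}.isAdmin); // true — every plain object now inherits the property
```

The fix in lodash 4.17.5 (commit d8e069cc, linked in the record) was to stop treating `__proto__` as an ordinary mergeable key.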
cve-2018-16487
Vulnerability from cvelistv5
Published: 2019-02-01 18:00
Modified: 2024-08-05 10:24
Summary
A prototype pollution vulnerability was found in lodash <4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.
Impacted products
HackerOne lodash


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-05T10:24:32.702Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://hackerone.com/reports/380873"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://security.netapp.com/advisory/ntap-20190919-0004/"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "lodash",
          "vendor": "HackerOne",
          "versions": [
            {
              "status": "affected",
              "version": "\u003c4.7.11"
            }
          ]
        }
      ],
      "datePublic": "2019-02-01T00:00:00",
      "descriptions": [
        {
          "lang": "en",
          "value": "A prototype pollution vulnerability was found in lodash \u003c4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-400",
              "description": "Denial of Service (CWE-400)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2019-09-19T16:06:08",
        "orgId": "36234546-b8fa-4601-9d6f-f4e334aa8ea1",
        "shortName": "hackerone"
      },
      "references": [
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://hackerone.com/reports/380873"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://security.netapp.com/advisory/ntap-20190919-0004/"
        }
      ],
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "support@hackerone.com",
          "ID": "CVE-2018-16487",
          "STATE": "PUBLIC"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "lodash",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "\u003c4.7.11"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "HackerOne"
              }
            ]
          }
        },
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "A prototype pollution vulnerability was found in lodash \u003c4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype."
            }
          ]
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "Denial of Service (CWE-400)"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "https://hackerone.com/reports/380873",
              "refsource": "MISC",
              "url": "https://hackerone.com/reports/380873"
            },
            {
              "name": "https://security.netapp.com/advisory/ntap-20190919-0004/",
              "refsource": "CONFIRM",
              "url": "https://security.netapp.com/advisory/ntap-20190919-0004/"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "36234546-b8fa-4601-9d6f-f4e334aa8ea1",
    "assignerShortName": "hackerone",
    "cveId": "CVE-2018-16487",
    "datePublished": "2019-02-01T18:00:00",
    "dateReserved": "2018-09-04T00:00:00",
    "dateUpdated": "2024-08-05T10:24:32.702Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2019-10744
Vulnerability from cvelistv5
Published: 2019-07-25 23:43
Modified: 2024-08-04 22:32
Summary
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
Impacted products
Snyk lodash


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-04T22:32:01.271Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "name": "RHSA-2019:3024",
            "tags": [
              "vendor-advisory",
              "x_refsource_REDHAT",
              "x_transferred"
            ],
            "url": "https://access.redhat.com/errata/RHSA-2019:3024"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpuoct2020.html"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JS-LODASH-450202"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://security.netapp.com/advisory/ntap-20191004-0005/"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://support.f5.com/csp/article/K47105354?utm_source=f5support\u0026amp%3Butm_medium=RSS"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpujan2021.html"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "lodash",
          "vendor": "Snyk",
          "versions": [
            {
              "status": "affected",
              "version": "All versions prior to 4.17.12"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload."
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "description": "Prototype Pollution",
              "lang": "en",
              "type": "text"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2021-01-20T14:42:00",
        "orgId": "bae035ff-b466-4ff4-94d0-fc9efd9e1730",
        "shortName": "snyk"
      },
      "references": [
        {
          "name": "RHSA-2019:3024",
          "tags": [
            "vendor-advisory",
            "x_refsource_REDHAT"
          ],
          "url": "https://access.redhat.com/errata/RHSA-2019:3024"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpuoct2020.html"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://snyk.io/vuln/SNYK-JS-LODASH-450202"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://security.netapp.com/advisory/ntap-20191004-0005/"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://support.f5.com/csp/article/K47105354?utm_source=f5support\u0026amp%3Butm_medium=RSS"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpujan2021.html"
        }
      ],
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "report@snyk.io",
          "ID": "CVE-2019-10744",
          "STATE": "PUBLIC"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "lodash",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "All versions prior to 4.17.12"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "Snyk"
              }
            ]
          }
        },
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload."
            }
          ]
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "Prototype Pollution"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "RHSA-2019:3024",
              "refsource": "REDHAT",
              "url": "https://access.redhat.com/errata/RHSA-2019:3024"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpuoct2020.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpuoct2020.html"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JS-LODASH-450202",
              "refsource": "CONFIRM",
              "url": "https://snyk.io/vuln/SNYK-JS-LODASH-450202"
            },
            {
              "name": "https://security.netapp.com/advisory/ntap-20191004-0005/",
              "refsource": "CONFIRM",
              "url": "https://security.netapp.com/advisory/ntap-20191004-0005/"
            },
            {
              "name": "https://support.f5.com/csp/article/K47105354?utm_source=f5support\u0026amp;utm_medium=RSS",
              "refsource": "CONFIRM",
              "url": "https://support.f5.com/csp/article/K47105354?utm_source=f5support\u0026amp;utm_medium=RSS"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpujan2021.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpujan2021.html"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "bae035ff-b466-4ff4-94d0-fc9efd9e1730",
    "assignerShortName": "snyk",
    "cveId": "CVE-2019-10744",
    "datePublished": "2019-07-25T23:43:03",
    "dateReserved": "2019-04-03T00:00:00",
    "dateUpdated": "2024-08-04T22:32:01.271Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

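CVE-2019-10744 above differs from the earlier `__proto__` reports in its payload shape: it reaches `Object.prototype` through a `constructor` key instead. A minimal, hypothetical defaults-deep sketch (again, not lodash's actual code; `naiveDefaultsDeep` and the `polluted` key are invented for illustration) shows why that path works: on a plain object, `target.constructor` is the `Object` function, and `Object.prototype` hangs off it.

```javascript
// Hypothetical defaults-deep reproducing the CVE-2019-10744 bug class.
// Walking into "constructor" lands on the Object function, and walking
// into its "prototype" lands on Object.prototype, which then gets written.
function naiveDefaultsDeep(target, source) {
  for (const key in source) {
    const value = source[key];
    const existing = target[key];
    if (
      value && typeof value === "object" &&
      (typeof existing === "object" || typeof existing === "function")
    ) {
      naiveDefaultsDeep(existing, value); // follows constructor -> prototype
    } else if (!(key in target)) {
      target[key] = value; // writes the attacker's property as a "default"
    }
  }
  return target;
}

// Attacker-controlled payload using the constructor path instead of __proto__.
const payload = JSON.parse('{"constructor": {"prototype": {"polluted": "yes"}}}');
naiveDefaultsDeep({}, payload);

console.log({}.polluted); // "yes" — Object.prototype was modified
```

lodash 4.17.12 blocked this by refusing to traverse `constructor`/`prototype` chains during deep merges.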
cve-2020-28500
Vulnerability from cvelistv5
Published: 2021-02-15 11:10
Modified: 2024-09-16 22:15
Summary
Regular Expression Denial of Service (ReDoS)
Impacted products
n/a Lodash


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-04T16:40:59.899Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JS-LODASH-1018905"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWER-1074892"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSNPM-1074893"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARS-1074894"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWERGITHUBLODASH-1074895"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://snyk.io/vuln/SNYK-JAVA-ORGFUJIONWEBJARS-1074896"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://github.com/lodash/lodash/blob/npm/trimEnd.js%23L8"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://github.com/lodash/lodash/pull/5065"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
          },
          {
            "tags": [
              "x_refsource_MISC",
              "x_transferred"
            ],
            "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
          },
          {
            "tags": [
              "x_refsource_CONFIRM",
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "product": "Lodash",
          "vendor": "n/a",
          "versions": [
            {
              "status": "affected",
              "version": "versions prior to 4.17.21"
            }
          ]
        }
      ],
      "credits": [
        {
          "lang": "en",
          "value": "Liyuan Chen"
        }
      ],
      "datePublic": "2021-02-15T00:00:00",
      "descriptions": [
        {
          "lang": "en",
          "value": "Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "availabilityImpact": "LOW",
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitCodeMaturity": "PROOF_OF_CONCEPT",
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "remediationLevel": "NOT_DEFINED",
            "reportConfidence": "NOT_DEFINED",
            "scope": "UNCHANGED",
            "temporalScore": 5,
            "temporalSeverity": "MEDIUM",
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L/E:P",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "description": "Regular Expression Denial of Service (ReDoS)",
              "lang": "en",
              "type": "text"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2022-09-13T11:06:20",
        "orgId": "bae035ff-b466-4ff4-94d0-fc9efd9e1730",
        "shortName": "snyk"
      },
      "references": [
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JS-LODASH-1018905"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWER-1074892"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSNPM-1074893"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARS-1074894"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWERGITHUBLODASH-1074895"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://snyk.io/vuln/SNYK-JAVA-ORGFUJIONWEBJARS-1074896"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://github.com/lodash/lodash/blob/npm/trimEnd.js%23L8"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://github.com/lodash/lodash/pull/5065"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
        },
        {
          "tags": [
            "x_refsource_MISC"
          ],
          "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
        },
        {
          "tags": [
            "x_refsource_CONFIRM"
          ],
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
        }
      ],
      "title": "Regular Expression Denial of Service (ReDoS)",
      "x_legacyV4Record": {
        "CVE_data_meta": {
          "ASSIGNER": "report@snyk.io",
          "DATE_PUBLIC": "2021-02-15T11:10:02.896752Z",
          "ID": "CVE-2020-28500",
          "STATE": "PUBLIC",
          "TITLE": "Regular Expression Denial of Service (ReDoS)"
        },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "product": {
                  "product_data": [
                    {
                      "product_name": "Lodash",
                      "version": {
                        "version_data": [
                          {
                            "version_value": "versions prior to 4.17.21"
                          }
                        ]
                      }
                    }
                  ]
                },
                "vendor_name": "n/a"
              }
            ]
          }
        },
        "credit": [
          {
            "lang": "eng",
            "value": "Liyuan Chen"
          }
        ],
        "data_format": "MITRE",
        "data_type": "CVE",
        "data_version": "4.0",
        "description": {
          "description_data": [
            {
              "lang": "eng",
              "value": "Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions."
            }
          ]
        },
        "impact": {
          "cvss": {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "availabilityImpact": "LOW",
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L/E:P",
            "version": "3.1"
          }
        },
        "problemtype": {
          "problemtype_data": [
            {
              "description": [
                {
                  "lang": "eng",
                  "value": "Regular Expression Denial of Service (ReDoS)"
                }
              ]
            }
          ]
        },
        "references": {
          "reference_data": [
            {
              "name": "https://snyk.io/vuln/SNYK-JS-LODASH-1018905",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JS-LODASH-1018905"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWER-1074892",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWER-1074892"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSNPM-1074893",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSNPM-1074893"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARS-1074894",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARS-1074894"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWERGITHUBLODASH-1074895",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGWEBJARSBOWERGITHUBLODASH-1074895"
            },
            {
              "name": "https://snyk.io/vuln/SNYK-JAVA-ORGFUJIONWEBJARS-1074896",
              "refsource": "MISC",
              "url": "https://snyk.io/vuln/SNYK-JAVA-ORGFUJIONWEBJARS-1074896"
            },
            {
              "name": "https://github.com/lodash/lodash/blob/npm/trimEnd.js%23L8",
              "refsource": "MISC",
              "url": "https://github.com/lodash/lodash/blob/npm/trimEnd.js%23L8"
            },
            {
              "name": "https://github.com/lodash/lodash/pull/5065",
              "refsource": "MISC",
              "url": "https://github.com/lodash/lodash/pull/5065"
            },
            {
              "name": "https://www.oracle.com//security-alerts/cpujul2021.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
            },
            {
              "name": "https://security.netapp.com/advisory/ntap-20210312-0006/",
              "refsource": "CONFIRM",
              "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpuoct2021.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpujan2022.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
            },
            {
              "name": "https://www.oracle.com/security-alerts/cpujul2022.html",
              "refsource": "MISC",
              "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
            },
            {
              "name": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf",
              "refsource": "CONFIRM",
              "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
            }
          ]
        }
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "bae035ff-b466-4ff4-94d0-fc9efd9e1730",
    "assignerShortName": "snyk",
    "cveId": "CVE-2020-28500",
    "datePublished": "2021-02-15T11:10:16.225227Z",
    "dateReserved": "2020-11-12T00:00:00",
    "dateUpdated": "2024-09-16T22:15:52.206Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
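The ReDoS in CVE-2020-28500 above comes from trimming with a backtracking-prone pattern: a regex of the shape `/\s+$/` can take quadratic time on inputs like a long run of spaces followed by one non-whitespace character, because the engine retries the whitespace run at every position. lodash 4.17.21 (PR #5065) avoided the regex by scanning for the boundary index instead. The sketch below is a simplified illustration of that linear-time approach, not lodash's actual implementation; `safeTrimEnd` is a name invented here.

```javascript
// Simplified sketch of the regex-free trimming approach adopted in
// lodash 4.17.21: scan backwards for the last non-whitespace character.
// Each step tests a single character, so no catastrophic backtracking
// is possible even on adversarial inputs such as " ".repeat(1e6) + "x".
function safeTrimEnd(str) {
  let end = str.length;
  while (end > 0 && /\s/.test(str[end - 1])) {
    end -= 1; // one character per iteration: strictly linear time
  }
  return str.slice(0, end);
}

console.log(JSON.stringify(safeTrimEnd("hello   "))); // "hello"
```

The same index-scanning idea applies to `trim` and to the whitespace-stripping step inside `toNumber`, the other two functions named in the record.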