var-202006-1640
Vulnerability from variot

A logic issue was addressed with improved restrictions. This issue is fixed in iOS 13.5 and iPadOS 13.5, tvOS 13.4.5, watchOS 6.2.5, Safari 13.1.1, iTunes 12.10.7 for Windows, iCloud for Windows 11.2, and iCloud for Windows 7.19. A remote attacker may be able to cause arbitrary code execution. Multiple Apple products contain a logic vulnerability caused by flawed handling of restrictions; a remote attacker could exploit it to execute arbitrary code. User interaction is required to exploit this vulnerability: the target must visit a malicious page or open a malicious file. The specific flaw exists within the implementation of the HasIndexedProperty DFG node. By performing actions in JavaScript, an attacker can trigger a type confusion condition and leverage it to execute code in the context of the current process. Apple iOS, tvOS, and iPadOS are all Apple products: Apple iOS is an operating system for mobile devices, Apple tvOS is a smart TV operating system, and Apple iPadOS is an operating system for iPad tablets. WebKit is the web browser engine component used by these products, and the security vulnerability exists in the WebKit component of several Apple products.

In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.

These updated images include numerous security fixes, bug fixes, and enhancements.

Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258
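
For hosts that consume RPM errata, the following is a minimal sketch of checking for and applying outstanding security errata; it assumes a RHEL host registered to the relevant repositories and is illustrative only (the container-image components in this advisory are updated through the operator, not yum).

# Illustrative sketch, assuming a subscribed RHEL host.
$ yum updateinfo list security     # list security errata still applicable to this host
$ yum update --security            # apply outstanding security errata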

Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume
1813506 - Dockerfile not compatible with docker and buildah
1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup
1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement
1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance
1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)
1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node.
1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default
1842254 - [NooBaa] Compression stats do not add up when compression id disabled
1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster
1849771 - [RFE] Account created by OBC should have same permissions as bucket owner
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot
1854501 - [Tracker-rhcs bug 1848494] pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount
1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. if the PV spec does not contain expansion params)
1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14)
1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage
1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards
1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found
1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining
1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script
1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases.

Solution:

Download the release images via:

quay.io/redhat/quay:v3.3.3
quay.io/redhat/clair-jwt:v3.3.3
quay.io/redhat/quay-builder:v3.3.3
quay.io/redhat/clair:v3.3.3
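
As an illustration, the listed images might be retrieved as follows; this is a minimal sketch assuming podman is installed and the account used has access to the quay.io/redhat repositories.

$ podman login quay.io                          # authenticate with registry credentials
$ podman pull quay.io/redhat/quay:v3.3.3
$ podman pull quay.io/redhat/clair-jwt:v3.3.3
$ podman pull quay.io/redhat/quay-builder:v3.3.3
$ podman pull quay.io/redhat/clair:v3.3.3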

Bugs fixed (https://bugzilla.redhat.com/):

1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display

JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version


=====================================================================
Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update Advisory ID: RHSA-2020:5633-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2020:5633 Issue date: 2021-02-24 CVE Names: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121 =====================================================================

Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6
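
The digests listed above can also be used in place of the version tag when inspecting a release payload; the following is a minimal sketch using the x86_64 digest quoted above.

# Inspect the release payload by digest instead of by tag
$ oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

# List the component images contained in the release payload
$ oc adm release info --pullspecs quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64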

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.
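
For the CLI path mentioned above, a minimal sketch of checking for and requesting the update with oc is shown below; it assumes a cluster-admin kubeconfig and that 4.7.0 is visible in the cluster's current channel.

$ oc adm upgrade                    # show the current version, channel, and available updates
$ oc adm upgrade --to=4.7.0         # request the update once 4.7.0 is listed as available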

Security Fix(es):

  • crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)

  • golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)

  • containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)

  • heketi: gluster-block volume password details available in logs (CVE-2020-10763)

  • golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)

  • golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state 1752220 - [OVN] Network Policy fails to work when project label gets overwritten 1756096 - Local storage operator should implement must-gather spec 1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 1768255 - installer reports 100% complete but failing components 1770017 - Init containers restart when the exited container is removed from node. 1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating 1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset 1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale 1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands 1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions 1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved" 1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor 1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image 1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration 1806000 - CRI-O failing with: error reserving ctr name 1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1810438 - Installation logs are not gathered from OCP nodes 1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist 1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation 1813012 - EtcdDiscoveryDomain no longer needed 1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints 1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use 1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist 1819457 - Package Server is in 'Cannot update' status despite properly working 1820141 - [RFE] deploy qemu-quest-agent on the nodes 1822744 - OCS Installation CI test flaking 1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario 1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool 1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file 1829723 - User workload monitoring alerts fire out of the box 1832968 - oc adm catalog mirror does not mirror the index image itself 1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN 1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters 1834995 - olmFull suite always fails once th suite is run on the same cluster 1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz 1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4 1838352 - OperatorExited, Pending marketplace-operator-... 
pod for several weeks 1838751 - [oVirt][Tracker] Re-enable skipped network tests 1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups 1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed 1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP 1841119 - Get rid of config patches and pass flags directly to kcm 1841175 - When an Install Plan gets deleted, OLM does not create a new one 1841381 - Issue with memoryMB validation 1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option 1844727 - Etcd container leaves grep and lsof zombie processes 1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs 1847074 - Filter bar layout issues at some screen widths on search page 1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural 1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5 1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service 1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard 1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing 1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD 1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service 1853115 - the restriction of --cloud option should be shown in help text. 1853116 - --to option does not work with --credentials-requests flag. 1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1854567 - "Installed Operators" list showing "duplicated" entries during installation 1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present 1855351 - Inconsistent Installer reactions to Ctrl-C during user input process 1855408 - OVN cluster unstable after running minimal scale test 1856351 - Build page should show metrics for when the build ran, not the last 30 minutes 1856354 - New APIServices missing from OpenAPI definitions 1857446 - ARO/Azure: excessive pod memory allocation causes node lockup 1857877 - Operator upgrades can delete existing CSV before completion 1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed 1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created 1860136 - default ingress does not propagate annotations to route object on update 1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed" 1860518 - unable to stop a crio pod 1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller 1862430 - LSO: PV creation lock should not be acquired in a loop 1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
1862608 - Virtual media does not work on hosts using BIOS, only UEFI 1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network 1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff 1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt 1866043 - Configurable table column headers can be illegible 1866087 - Examining agones helm chart resources results in "Oh no!" 1866261 - Need to indicate the intentional behavior for Ansible in the create api help info 1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement 1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity 1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help 1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed 1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations 1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x 1866482 - Few errors are seen when oc adm must-gather is run 1866605 - No metadata.generation set for build and buildconfig objects 1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name 1866901 - Deployment strategy for BMO allows multiple pods to run at the same time 1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. 1867165 - Cannot assign static address to baremetal install bootstrap vm 1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig 1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS 1867477 - HPA monitoring cpu utilization fails for deployments which have init containers 1867518 - [oc] oc should not print so many goroutines when ANY command fails 1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster 1867965 - OpenShift Console Deployment Edit overwrites deployment yaml 1868004 - opm index add appears to produce image with wrong registry server binary 1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table" 1868104 - Baremetal actuator should not delete Machine objects 1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead 1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters 1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node 1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running 1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation 1868765 - [vsphere][ci] could not reserve an IP address: no available addresses 1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster 1868976 - Prometheus error opening query log file on EBS backed PVC 1869293 - The configmap name looks confusing in aide-ds pod logs 1869606 - crio's failing to delete a network namespace 1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes 1870342 - [sig-scheduling] 
SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] 1870373 - Ingress Operator reports available when DNS fails to provision 1870467 - D/DC Part of Helm / Operator Backed should not have HPA 1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json 1870800 - [4.6] Managed Column not appearing on Pods Details page 1871170 - e2e tests are needed to validate the functionality of the etcdctl container 1872001 - EtcdDiscoveryDomain no longer needed 1872095 - content are expanded to the whole line when only one column in table on Resource Details page 1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console 1872128 - Can't run container with hostPort on ipv6 cluster 1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective 1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity 1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them 1872821 - [DOC] Typo in Ansible Operator Tutorial 1872907 - Fail to create CR from generated Helm Base Operator 1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page) 1873007 - [downstream] failed to read config when running the operator-sdk in the home path 1873030 - Subscriptions without any candidate operators should cause resolution to fail 1873043 - Bump to latest available 1.19.x k8s 1873114 - Nodes goes into NotReady state (VMware) 1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem 1873305 - Failed to power on /inspect node when using Redfish protocol 1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information 1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: “?” button/icon in Developer Console ->Navigation 1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working 1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters 1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\"" 1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver 1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider 1874240 - [vsphere] unable to deprovision - Runtime error list attached objects 1874248 - Include validation for vcenter host in the install-config 1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6 1874583 - apiserver tries and fails to log an event when shutting down 1874584 - add retry for etcd errors in kube-apiserver 1874638 - Missing logging for nbctl daemon 1874736 - [downstream] no version info for the helm-operator 1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution 1874968 - Accessibility: The project selection drop down is a keyboard trap 1875247 - Dependency resolution error "found more than one head for channel" is 
unhelpful for users 1875516 - disabled scheduling is easy to miss in node page of OCP console 1875598 - machine status is Running for a master node which has been terminated from the console 1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes. 1876166 - need to be able to disable kube-apiserver connectivity checks 1876469 - Invalid doc link on yaml template schema description 1876701 - podCount specDescriptor change doesn't take effect on operand details page 1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt 1876935 - AWS volume snapshot is not deleted after the cluster is destroyed 1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted 1877105 - add redfish to enabled_bios_interfaces 1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted 1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown 1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices' 1877681 - Manually created PV can not be used 1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53 1877740 - RHCOS unable to get ip address during first boot 1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5 1877919 - panic in multus-admission-controller 1877924 - Cannot set BIOS config using Redfish with Dell iDracs 1878022 - Met imagestreamimport error when import the whole image repository 1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated 1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status 1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM 1878766 - CPU consumption on nodes is higher than the CPU count of the node. 1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus. 
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image" 1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode 1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used 1878953 - RBAC error shows when normal user access pvc upload page 1878956 - oc api-resources does not include API version 1878972 - oc adm release mirror removes the architecture information 1879013 - [RFE]Improve CD-ROM interface selection 1879056 - UI should allow to change or unset the evictionStrategy 1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled 1879094 - RHCOS dhcp kernel parameters not working as expected 1879099 - Extra reboot during 4.5 -> 4.6 upgrade 1879244 - Error adding container to network "ipvlan-host-local": "master" field is required 1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder 1879282 - Update OLM references to point to the OLM's new doc site 1879283 - panic after nil pointer dereference in pkg/daemon/update.go 1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests 1879419 - [RFE]Improve boot source description for 'Container' and ‘URL’ 1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted. 1879565 - IPv6 installation fails on node-valid-hostname 1879777 - Overlapping, divergent openshift-machine-api namespace manifests 1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy 1879930 - Annotations shouldn't be removed during object reconciliation 1879976 - No other channel visible from console 1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc. 1880148 - dns daemonset rolls out slowly in large clusters 1880161 - Actuator Update calls should have fixed retry time 1880259 - additional network + OVN network installation failed 1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed" 1880410 - Convert Pipeline Visualization node to SVG 1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn 1880443 - broken machine pool management on OpenStack 1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s. 
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation 1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables) 1880785 - CredentialsRequest missing description in oc explain 1880787 - No description for Provisioning CRD for oc explain 1880902 - need dnsPlocy set in crd ingresscontrollers 1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster 1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use 1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets 1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node 1881268 - Image uploading failed but wizard claim the source is available 1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration 1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup 1881881 - unable to specify target port manually resulting in application not reachable 1881898 - misalignment of sub-title in quick start headers 1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster 1882057 - Not able to select access modes for snapshot and clone 1882140 - No description for spec.kubeletConfig 1882176 - Master recovery instructions don't handle IP change well 1882191 - Installation fails against external resources which lack DNS Subject Alternative Name 1882209 - [ BateMetal IPI ] local coredns resolution not working 1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version" 1882268 - [e2e][automation]Add Integration Test for Snapshots 1882361 - Retrieve and expose the latest report for the cluster 1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use 1882556 - git:// protocol in origin tests is not currently proxied 1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4 1882608 - Spot instance not getting created on AzureGovCloud 1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance 1882649 - IPI installer labels all images it uploads into glance as qcow2 1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic 1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page 1882660 - Operators in a namespace should be installed together when approve one 1882667 - [ovn] br-ex Link not found when scale up RHEL worker 1882723 - [vsphere]Suggested mimimum value for providerspec not working 1882730 - z systems not reporting correct core count in recording rule 1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully 1882781 - nameserver= option to dracut creates extra NM connection profile 1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined 1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status 1883422 - operator-sdk cleanup fail after installing operator 
with "run bundle" without installmode and og with ownnamespace 1883425 - Gather top installplans and their count 1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2 1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel] 1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error 1883560 - operator-registry image needs clean up in /tmp 1883563 - Creating duplicate namespace from create namespace modal breaks the UI 1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful" 1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate 1883660 - e2e-metal-ipi CI job consistently failing on 4.4 1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests 1883766 - [e2e][automation] Adjust tests for UI changes 1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations 1883773 - opm alpha bundle build fails on win10 home 1883790 - revert "force cert rotation every couple days for development" in 4.7 1883803 - node pull secret feature is not working as expected 1883836 - Jenkins imagestream ubi8 and nodejs12 update 1883847 - The UI does not show checkbox for enable encryption at rest for OCS 1883853 - go list -m all does not work 1883905 - race condition in opm index add --overwrite-latest 1883946 - Understand why trident CSI pods are getting deleted by OCP 1884035 - Pods are illegally transitioning back to pending 1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace 1884131 - oauth-proxy repository should run tests 1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied 1884221 - IO becomes unhealthy due to a file change 1884258 - Node network alerts should work on ratio rather than absolute values 1884270 - Git clone does not support SCP-style ssh locations 1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout 1884435 - vsphere - loopback is randomly not being added to resolver 1884565 - oauth-proxy crashes on invalid usage 1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy 1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users 1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment 1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu. 1884632 - Adding BYOK disk encryption through DES 1884654 - Utilization of a VMI is not populated 1884655 - KeyError on self._existing_vifs[port_id] 1884664 - Operator install page shows "installing..." instead of going to install status page 1884672 - Failed to inspect hardware. 
Reason: unable to start inspection: 'idrac' 1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure 1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps 1884739 - Node process segfaulted 1884824 - Update baremetal-operator libraries to k8s 1.19 1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping 1885138 - Wrong detection of pending state in VM details 1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2 1885165 - NoRunningOvnMaster alert falsely triggered 1885170 - Nil pointer when verifying images 1885173 - [e2e][automation] Add test for next run configuration feature 1885179 - oc image append fails on push (uploading a new layer) 1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig 1885218 - [e2e][automation] Add virtctl to gating script 1885223 - Sync with upstream (fix panicking cluster-capacity binary) 1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2 1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2 1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2 1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2 1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2 1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2 1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI 1885315 - unit tests fail on slow disks 1885319 - Remove redundant use of group and kind of DataVolumeTemplate 1885343 - Console doesn't load in iOS Safari when using self-signed certificates 1885344 - 4.7 upgrade - dummy bug for 1880591 1885358 - add p&f configuration to protect openshift traffic 1885365 - MCO does not respect the install section of systemd files when enabling 1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating 1885398 - CSV with only Webhook conversion can't be installed 1885403 - Some OLM events hide the underlying errors 1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case 1885425 - opm index add cannot batch add multiple bundles that use skips 1885543 - node tuning operator builds and installs an unsigned RPM 1885644 - Panic output due to timeouts in openshift-apiserver 1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment 1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations 1885706 - Cypress: Fix 'link-name' accesibility violation 1885761 - DNS fails to resolve in some pods 1885856 - Missing registry v1 protocol usage metric on telemetry 1885864 - Stalld service crashed under the worker node 1885930 - [release 4.7] Collect ServiceAccount statistics 1885940 - kuryr/demo image ping not working 1886007 - upgrade test with service type load balancer will never work 1886022 - Move range allocations to CRD's 1886028 - [BM][IPI] Failed to delete node after scale down 1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas 1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd 1886154 - System roles are not present while trying to create new role binding through web console 1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm 
1886168 - Remove Terminal Option for Windows Nodes 1886200 - greenwave / CVP is failing on bundle validations, cannot stage push 1886229 - Multipath support for RHCOS sysroot 1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage 1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status 1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL 1886397 - Move object-enum to console-shared 1886423 - New Affinities don't contain ID until saving 1886435 - Azure UPI uses deprecated command 'group deployment' 1886449 - p&f: add configuration to protect oauth server traffic 1886452 - layout options doesn't gets selected style on click i.e grey background 1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected 1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest 1886524 - Change default terminal command for Windows Pods 1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution 1886600 - panic: assignment to entry in nil map 1886620 - Application behind service load balancer with PDB is not disrupted 1886627 - Kube-apiserver pods restarting/reinitializing periodically 1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider 1886636 - Panic in machine-config-operator 1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer. 1886751 - Gather MachineConfigPools 1886766 - PVC dropdown has 'Persistent Volume' Label 1886834 - ovn-cert is mandatory in both master and node daemonsets 1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState 1886861 - ordered-values.yaml not honored if values.schema.json provided 1886871 - Neutron ports created for hostNetworking pods 1886890 - Overwrite jenkins-agent-base imagestream 1886900 - Cluster-version operator fills logs with "Manifest: ..." 
spew 1886922 - [sig-network] pods should successfully create sandboxes by getting pod 1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console 1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO 1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded 1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster 1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6 1887046 - Event for LSO need update to avoid confusion 1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image 1887375 - User should be able to specify volumeMode when creating pvc from web-console 1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console 1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval 1887428 - oauth-apiserver service should be monitored by prometheus 1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False" 1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data 1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes 1887465 - Deleted project is still referenced 1887472 - unable to edit application group for KSVC via gestures (shift+Drag) 1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface 1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster 1887525 - Failures to set master HardwareDetails cannot easily be debugged 1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable 1887585 - ovn-masters stuck in crashloop after scale test 1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade. 
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator 1887740 - cannot install descheduler operator after uninstalling it 1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events 1887750 - oc explain localvolumediscovery returns empty description 1887751 - oc explain localvolumediscoveryresult returns empty description 1887778 - Add ContainerRuntimeConfig gatherer 1887783 - PVC upload cannot continue after approve the certificate 1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard 1887799 - User workload monitoring prometheus-config-reloader OOM 1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky 1887863 - Installer panics on invalid flavor 1887864 - Clean up dependencies to avoid invalid scan flagging 1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison 1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig 1888015 - workaround kubelet graceful termination of static pods bug 1888028 - prevent extra cycle in aggregated apiservers 1888036 - Operator details shows old CRD versions 1888041 - non-terminating pods are going from running to pending 1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect 1888073 - Operator controller continuously busy looping 1888118 - Memory requests not specified for image registry operator 1888150 - Install Operand Form on OperatorHub is displaying unformatted text 1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced 1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build 1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5 1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt 1888363 - namespaces crash in dev 1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created 1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected 1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC 1888494 - imagepruner pod is error when image registry storage is not configured 1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree" 1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error 1888601 - The poddisruptionbudgets is using the operator service account, instead of gather 1888657 - oc doesn't know its name 1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable 1888671 - Document the Cloud Provider's ignore-volume-az setting 1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image 1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName() 1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set 1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster 1888866 - AggregatedAPIDown permanently firing after removing APIService 1888870 - JS error when using autocomplete in YAML editor 1888874 - hover message are not shown for some properties 1888900 - align plugins versions 1888985 - Cypress: Fix 'Ensures buttons have discernible text' 
accesibility violation 1889213 - The error message of uploading failure is not clear enough 1889267 - Increase the time out for creating template and upload image in the terraform 1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages) 1889374 - Kiali feature won't work on fresh 4.6 cluster 1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode 1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade 1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information 1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance 1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown 1889577 - Resources are not shown on project workloads page 1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment 1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages 1889692 - Selected Capacity is showing wrong size 1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15 1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off 1889710 - Prometheus metrics on disk take more space compared to OCP 4.5 1889721 - opm index add semver-skippatch mode does not respect prerelease versions 1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab 1889767 - [vsphere] Remove certificate from upi-installer image 1889779 - error when destroying a vSphere installation that failed early 1889787 - OCP is flooding the oVirt engine with auth errors 1889838 - race in Operator update after fix from bz1888073 1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1 1889863 - Router prints incorrect log message for namespace label selector 1889891 - Backport timecache LRU fix 1889912 - Drains can cause high CPU usage 1889921 - Reported Degraded=False Available=False pair does not make sense 1889928 - [e2e][automation] Add more tests for golden os 1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName 1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings 1890074 - MCO extension kernel-headers is invalid 1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest 1890130 - multitenant mode consistently fails CI 1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e 1890145 - The mismatched of font size for Status Ready and Health Check secondary text 1890180 - FieldDependency x-descriptor doesn't support non-sibling fields 1890182 - DaemonSet with existing owner garbage collected 1890228 - AWS: destroy stuck on route53 hosted zone not found 1890235 - e2e: update Protractor's checkErrors logging 1890250 - workers may fail to join the cluster during an update from 4.5 1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member 1890270 - External IP doesn't work if the IP address is not assigned to a node 1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability 1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere 1890467 - unable to edit an application without a service 1890472 - 
[Kuryr] Bulk port creation exception not completely formatted 1890494 - Error assigning Egress IP on GCP 1890530 - cluster-policy-controller doesn't gracefully terminate 1890630 - [Kuryr] Available port count not correctly calculated for alerts 1890671 - [SA] verify-image-signature using service account does not work 1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest 1890808 - New etcd alerts need to be added to the monitoring stack 1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha. 1890984 - Rename operator-webhook-config to sriov-operator-webhook-config 1890995 - wew-app should provide more insight into why image deployment failed 1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call 1891047 - Helm chart fails to install using developer console because of TLS certificate error 1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler 1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI 1891108 - p&f: Increase the concurrency share of workload-low priority level 1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine) 1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown 1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart) 1891362 - Wrong metrics count for openshift_build_result_total 1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message 1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message 1891376 - Extra text in Cluster Utilization charts 1891419 - Wrong detail head on network policy detail page. 1891459 - Snapshot tests should report stderr of failed commands 1891498 - Other machine config pools do not show during update 1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage 1891551 - Clusterautoscaler doesn't scale up as expected 1891552 - Handle missing labels as empty. 1891555 - The windows oc.exe binary does not have version metadata 1891559 - kuryr-cni cannot start new thread 1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11 1891625 - [Release 4.7] Mutable LoadBalancer Scope 1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml 1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails 1891740 - OperatorStatusChanged is noisy 1891758 - the authentication operator may spam DeploymentUpdated event endlessly 1891759 - Dockerfile builds cannot change /etc/pki/ca-trust 1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1 1891825 - Error message not very informative in case of mode mismatch 1891898 - The ClusterServiceVersion can define Webhooks that cannot be created. 
1891951 - UI should show warning while creating pools with compression on 1891952 - [Release 4.7] Apps Domain Enhancement 1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace 1891995 - OperatorHub displaying old content 1891999 - Storage efficiency card showing wrong compression ratio 1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.28' not found (required by ./opm) 1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. 1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator' 1892288 - assisted install workflow creates excessive control-plane disruption 1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config 1892358 - [e2e][automation] update feature gate for kubevirt-gating job 1892376 - Deleted netnamespace could not be re-created 1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky 1892393 - TestListPackages is flaky 1892448 - MCDPivotError alert/metric missing 1892457 - NTO-shipped stalld needs to use FIFO for boosting. 1892467 - linuxptp-daemon crash 1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env 1892653 - User is unable to create KafkaSource with v1beta 1892724 - VFS added to the list of devices of the nodeptpdevice CRD 1892799 - Mounting additionalTrustBundle in the operator 1893117 - Maintenance mode on vSphere blocks installation. 1893351 - TLS secrets are not able to edit on console. 1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots 1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability 1893546 - Deploy using virtual media fails on node cleaning step 1893601 - overview filesystem utilization of OCP is showing the wrong values 1893645 - oc describe route SIGSEGV 1893648 - Ironic image building process is not compatible with UEFI secure boot 1893724 - OperatorHub generates incorrect RBAC 1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted 1893776 - No useful metrics for image pull time available, making debugging issues there impossible 1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator 1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD 1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS 1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped 1893944 - Wrong product name for Multicloud Object Gateway 1893953 - (release-4.7) Gather default StatefulSet configs 1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating" 1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser 1893972 - Should skip e2e test cases as early as possible 1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://' 1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective 1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set 1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage 
class name is not displayed if default storage class is used. 1894065 - tag new packages to enable TLS support 1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0 1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries 1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM 1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted 1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI) 1894216 - Improve OpenShift Web Console availability 1894275 - Fix CRO owners file to reflect node owner 1894278 - "database is locked" error when adding bundle to index image 1894330 - upgrade channels needs to be updated for 4.7 1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient" 1894374 - Dont prevent the user from uploading a file with incorrect extension 1894432 - [oVirt] sometimes installer timeout on tmp_import_vm 1894477 - bash syntax error in nodeip-configuration.service 1894503 - add automated test for Polarion CNV-5045 1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform 1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets 1894645 - Cinder volume provisioning crashes on nil cloud provider 1894677 - image-pruner job is panicking: klog stack 1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0 1894860 - 'backend' CI job passing despite failing tests 1894910 - Update the node to use the real-time kernel fails 1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package 1895065 - Schema / Samples / Snippets Tabs are all selected at the same time 1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI 1895141 - panic in service-ca injector 1895147 - Remove memory limits on openshift-dns 1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation 1895268 - The bundleAPIs should NOT be empty 1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster 1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release" 1895360 - Machine Config Daemon removes a file although its defined in the dropin 1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. 
OCP 4.6.1 1895372 - Web console going blank after selecting any operator to install from OperatorHub 1895385 - Revert KUBELET_LOG_LEVEL back to level 3 1895423 - unable to edit an application with a custom builder image 1895430 - unable to edit custom template application 1895509 - Backup taken on one master cannot be restored on other masters 1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image 1895838 - oc explain description contains '/' 1895908 - "virtio" option is not available when modifying a CD-ROM to disk type 1895909 - e2e-metal-ipi-ovn-dualstack is failing 1895919 - NTO fails to load kernel modules 1895959 - configuring webhook token authentication should prevent cluster upgrades 1895979 - Unable to get coreos-installer with --copy-network to work 1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV 1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded) 1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed 1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... does not exist., badRequest 1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded 1896244 - Found a panic in storage e2e test 1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general 1896302 - [e2e][automation] Fix 4.6 test failures 1896365 - [Migration]The SDN migration cannot revert under some conditions 1896384 - [ovirt IPI]: local coredns resolution not working 1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6 1896529 - Incorrect instructions in the Serverless operator and application quick starts 1896645 - documentationBaseURL needs to be updated for 4.7 1896697 - [Descheduler] policy.yaml param in cluster configmap is empty 1896704 - Machine API components should honour cluster wide proxy settings 1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters 1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator 1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails 1896918 - start creating new-style Secrets for AWS 1896923 - DNS pod /metrics exposed on anonymous http port 1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1897003 - VNC console cannot be connected after visit it in new window 1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals 1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO 1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored 1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. 
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces 1897138 - oVirt provider uses depricated cluster-api project 1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly 1897252 - Firing alerts are not showing up in console UI after cluster is up for some time 1897354 - Operator installation showing success, but Provided APIs are missing 1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused" 1897412 - [sriov] disableDrain did not be updated in CRD of manifest 1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page 1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost' 1897520 - After restarting nodes the image-registry co is in degraded true state. 1897584 - Add casc plugins 1897603 - Cinder volume attachment detection failure in Kubelet 1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized" 1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests 1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition 1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot Create OCS Cluster Service 1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing 1897897 - ptp lose sync openshift 4.6 1898036 - no network after reboot (IPI) 1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically 1898097 - mDNS floods the baremetal network 1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem 1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied 1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster 1898174 - [OVN] EgressIP does not guard against node IP assignment 1898194 - GCP: can't install on custom machine types 1898238 - Installer validations allow same floating IP for API and Ingress 1898268 - [OVN]: 'make check' broken on 4.6 1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default 1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover 1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 
1898407 - [Deployment timing regression] Deployment takes longer with 4.7 1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service 1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine 1898500 - Failure to upgrade operator when a Service is included in a Bundle 1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic 1898532 - Display names defined in specDescriptors not respected 1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted 1898613 - Whereabouts should exclude IPv6 ranges 1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase 1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk 1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability 1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator 1898839 - Wrong YAML in operator metadata 1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job 1898873 - Remove TechPreview Badge from Monitoring 1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way 1899111 - [RFE] Update jenkins-maven-agen to maven36 1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist 1899175 - bump the RHCOS boot images for 4.7 1899198 - Use new packages for ipa ramdisks 1899200 - In Installed Operators page I cannot search for an Operator by it's name 1899220 - Support AWS IMDSv2 1899350 - configure-ovs.sh doesn't configure bonding options 1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found" 1899459 - Failed to start monitoring pods once the operator removed from override list of CVO 1899515 - Passthrough credentials are not immediately re-distributed on update 1899575 - update discovery burst to reflect lots of CRDs on openshift clusters 1899582 - update discovery burst to reflect lots of CRDs on openshift clusters 1899588 - Operator objects are re-created after all other associated resources have been deleted 1899600 - Increased etcd fsync latency as of OCP 4.6 1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup 1899627 - Project dashboard Active status using small icon 1899725 - Pods table does not wrap well with quick start sidebar open 1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD) 1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality 1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0" 1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap 1899853 - additionalSecurityGroupIDs not working for master nodes 1899922 - NP changes sometimes influence new pods. 
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet 1900008 - Fix internationalized sentence fragments in ImageSearch.tsx 1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx 1900020 - Remove &apos; from internationalized keys 1900022 - Search Page - Top labels field is not applied to selected Pipeline resources 1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently 1900126 - Creating a VM results in suggestion to create a default storage class when one already exists 1900138 - [OCP on RHV] Remove insecure mode from the installer 1900196 - stalld is not restarted after crash 1900239 - Skip "subPath should be able to unmount" NFS test 1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists 1900377 - [e2e][automation] create new css selector for active users 1900496 - (release-4.7) Collect spec config for clusteroperator resources 1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks 1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue 1900759 - include qemu-guest-agent by default 1900790 - Track all resource counts via telemetry 1900835 - Multus errors when cachefile is not found 1900935 - 'oc adm release mirror' panic panic: runtime error 1900989 - accessing the route cannot wake up the idled resources 1901040 - When scaling down the status of the node is stuck on deleting 1901057 - authentication operator health check failed when installing a cluster behind proxy 1901107 - pod donut shows incorrect information 1901111 - Installer dependencies are broken 1901200 - linuxptp-daemon crash when enable debug log level 1901301 - CBO should handle platform=BM without provisioning CR 1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly 1901363 - High Podready Latency due to timed out waiting for annotations 1901373 - redundant bracket on snapshot restore button 1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true" 1901395 - "Edit virtual machine template" action link should be removed 1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting 1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP 1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema 1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance" 1901604 - CNO blocks editing Kuryr options 1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled 1901909 - The device plugin pods / cni pod are restarted every 5 minutes 1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service 1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error 1902059 - Wire a real signer for service accout issuer 1902091 - 'cluster-image-registry-operator' pod leaves connections open when fails 
connecting S3 storage 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod 1902253 - MHC status doesnt set RemediationsAllowed = 0 1902299 - Failed to mirror operator catalog - error: destination registry required 1902545 - Cinder csi driver node pod should add nodeSelector for Linux 1902546 - Cinder csi driver node pod doesn't run on master node 1902547 - Cinder csi driver controller pod doesn't run on master node 1902552 - Cinder csi driver does not use the downstream images 1902595 - Project workloads list view doesn't show alert icon and hover message 1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent 1902601 - Cinder csi driver pods run as BestEffort qosClass 1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group 1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails 1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked 1902824 - failed to generate semver informed package manifest: unable to determine default channel 1902894 - hybrid-overlay-node crashing trying to get node object during initialization 1902969 - Cannot load vmi detail page 1902981 - It should default to current namespace when create vm from template 1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI 1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry 1903034 - OLM continuously printing debug logs 1903062 - [Cinder csi driver] Deployment mounted volume have no write access 1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready 1903107 - Enable vsphere-problem-detector e2e tests 1903164 - OpenShift YAML editor jumps to top every few seconds 1903165 - Improve Canary Status Condition handling for e2e tests 1903172 - Column Management: Fix sticky footer on scroll 1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled 1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format: 1903192 - Role name missing on create role binding form 1903196 - Popover positioning is misaligned for Overview Dashboard status items 1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. 1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components 1903248 - Backport Upstream Static Pod UID patch 1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests] 1903290 - Kubelet repeatedly log the same log line from exited containers 1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. 
1903382 - Panic when task-graph is canceled with a TaskNode with no tasks 1903400 - Migrate a VM which is not running goes to pending state 1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page 1903414 - NodePort is not working when configuring an egress IP address 1903424 - mapi_machine_phase_transition_seconds_sum doesn't work 1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum" 1903639 - Hostsubnet gatherer produces wrong output 1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service 1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started 1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image 1903717 - Handle different Pod selectors for metal3 Deployment 1903733 - Scale up followed by scale down can delete all running workers 1903917 - Failed to load "Developer Catalog" page 1903999 - Httplog response code is always zero 1904026 - The quota controllers should resync on new resources and make progress 1904064 - Automated cleaning is disabled by default 1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases 1904125 - Boostrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap 1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails 1904133 - KubeletConfig flooded with failure conditions 1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart 1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi ! 
1904244 - MissingKey errors for two plugins using i18next.t 1904262 - clusterresourceoverride-operator has version: 1.0.0 every build 1904296 - VPA-operator has version: 1.0.0 every build 1904297 - The index image generated by "opm index prune" leaves unrelated images 1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards 1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade 1904497 - vsphere-problem-detector: Run on vSphere cloud only 1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set 1904502 - vsphere-problem-detector: allow longer timeouts for some operations 1904503 - vsphere-problem-detector: emit alerts 1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody) 1904578 - metric scraping for vsphere problem detector is not configured 1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade 1904663 - IPI pointer customization MachineConfig always generated 1904679 - [Feature:ImageInfo] Image info should display information about images 1904683 - [sig-builds][Feature:Builds] s2i build with a root user image tests use docker.io image 1904684 - [sig-cli] oc debug ensure it works with image streams 1904713 - Helm charts with kubeVersion restriction are filtered incorrectly 1904776 - Snapshot modal alert is not pluralized 1904824 - Set vSphere hostname from guestinfo before NM starts 1904941 - Insights status is always showing a loading icon 1904973 - KeyError: 'nodeName' on NP deletion 1904985 - Prometheus and thanos sidecar targets are down 1904993 - Many ampersand special characters are found in strings 1905066 - QE - Monitoring test cases - smoke test suite automation 1905074 - QE - Gherkin linter to maintain standards 1905100 - Too many haproxy processes in default-router pod causing high load average 1905104 - Snapshot modal disk items missing keys 1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm 1905119 - Race in AWS EBS determining whether custom CA bundle is used 1905128 - [e2e][automation] e2e tests succeed without actually execute 1905133 - operator conditions special-resource-operator 1905141 - vsphere-problem-detector: report metrics through telemetry 1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures 1905194 - Detecting broken connections to the Kube API takes up to 15 minutes 1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests 1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP 1905253 - Inaccurate text at bottom of Events page 1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905299 - OLM fails to update operator 1905307 - Provisioning CR is missing from must-gather 1905319 - cluster-samples-operator containers are not requesting required memory resource 1905320 - csi-snapshot-webhook is not requesting required memory resource 1905323 - dns-operator is not requesting required memory resource 1905324 - ingress-operator is not requesting required memory resource 1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory 1905328 - Changing the bound token service account issuer 
invalids previously issued bound tokens 1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory 1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails 1905347 - QE - Design Gherkin Scenarios 1905348 - QE - Design Gherkin Scenarios 1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod 1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted 1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input 1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation 1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1 1905404 - The example of "Remove the entrypoint on the mysql:latest image" for 'oc image append' does not work 1905416 - Hyperlink not working from Operator Description 1905430 - usbguard extension fails to install because of missing correct protobuf dependency version 1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads 1905502 - Test flake - unable to get https transport for ephemeral-registry 1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6. 1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs 1905610 - Fix typo in export script 1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster 1905640 - Subscription manual approval test is flaky 1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry 1905696 - ClusterMoreUpdatesModal component did not get internationalized 1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes 1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project 1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster 1905792 - [OVN] Cannot create egressfirewalll with dnsName 1905889 - Should create SA for each namespace that the operator scoped 1905920 - Quickstart exit and restart 1905941 - Page goes to error after create catalogsource 1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711 1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters 1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected 1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it 1906118 - OCS feature detection constantly polls storageclusters and storageclasses 1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource 1906121 - [oc] After new-project creation, the kubeconfig file does not set the project 1906134 - OLM should not create OperatorConditions for copied CSVs 1906143 - CBO supports log levels 1906186 - i18n: Translators are not able to translate 'this' without context for alert manager config 1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots 1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize. 
1906276 - 'oc image append' can't work with multi-arch image with --filter-by-os='.*' 1906318 - use proper term for Authorized SSH Keys 1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional 1906356 - Unify Clone PVC boot source flow with URL/Container boot source 1906397 - IPA has incorrect kernel command line arguments 1906441 - HorizontalNav and NavBar have invalid keys 1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log 1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project 1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them 1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures 1906511 - Root reprovisioning tests flaking often in CI 1906517 - Validation is not robust enough and may prevent to generate install-confing. 1906518 - Update snapshot API CRDs to v1 1906519 - Update LSO CRDs to use v1 1906570 - Number of disruptions caused by reboots on a cluster cannot be measured 1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope 1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs 1906655 - [SDN] Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs 1906679 - quick start panel styles are not loaded 1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber 1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form 1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created 1906689 - user can pin to nav configmaps and secrets multiple times 1906691 - Add doc which describes disabling helm chart repository 1906713 - Quick starts not accesible for a developer user 1906718 - helm chart "provided by Redhat" is misspelled 1906732 - Machine API proxy support should be tested 1906745 - Update Helm endpoints to use Helm 3.4.x 1906760 - performance issues with topology constantly re-rendering 1906766 - localized 'Autoscaled' & 'Autoscaling' pod texts overlap with the pod ring 1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section 1906769 - topology fails to load with non-kubeadmin user 1906770 - shortcuts on mobiles view occupies a lot of space 1906798 - Dev catalog customization doesn't update console-config ConfigMap 1906806 - Allow installing extra packages in ironic container images 1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer 1906835 - Topology view shows add page before then showing full project workloads 1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version 1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy 1906860 - Bump kube dependencies to v1.20 for Net Edge components 1906864 - Quick Starts Tour: Need to adjust vertical spacing 1906866 - Translations of Sample-Utils 1906871 - White screen when sort by name in monitoring alerts page 1906872 - Pipeline Tech Preview Badge Alignment 1906875 - Provide an option to force backup even when API is not available. 
1906877 - 'Placeholder' value in search filter do not match column heading in Vulnerabilities 1906879 - Add missing i18n keys 1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install 1906896 - No Alerts causes odd empty Table (Need no content message) 1906898 - Missing User RoleBindings in the Project Access Web UI 1906899 - Quick Start - Highlight Bounding Box Issue 1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1 1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers 1906935 - Delete resources when Provisioning CR is deleted 1906968 - Must-gather should support collecting kubernetes-nmstate resources 1906986 - Ensure failed pod adds are retried even if the pod object doesn't change 1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt 1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change 1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible. 1907269 - Tooltips data are different when checking stack or not checking stack for the same time 1907280 - Install tour of OCS not available. 1907282 - Topology page breaks with white screen 1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance 1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent 1907293 - Increase timeouts in e2e tests 1907295 - Gherkin script for improve management for helm 1907299 - Advanced Subscription Badge for KMS and Arbiter not present 1907303 - Align VM template list items by baseline 1907304 - Use PF styles for selected template card in VM Wizard 1907305 - Drop 'ISO' from CDROM boot source message 1907307 - Support and provider labels should be passed on between templates and sources 1907310 - Pin action should be renamed to favorite 1907312 - VM Template source popover is missing info about added date 1907313 - ClusterOperator objects cannot be overriden with cvo-overrides 1907328 - iproute-tc package is missing in ovn-kube image 1907329 - CLUSTER_PROFILE env. variable is not used by the CVO 1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached" 1907373 - Rebase to kube 1.20.0 1907375 - Bump to latest available 1.20.x k8s - workloads team 1907378 - Gather netnamespaces networking info 1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity 1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one 1907390 - prometheus-adapter: panic after k8s 1.20 bump 1907399 - build log icon link on topology nodes cause app to reload 1907407 - Buildah version not accessible 1907421 - [4.6.1] oc-image-mirror command failed on "error: unable to copy layer" 1907453 - Dev Perspective -> running vm details -> resources -> no data 1907454 - Install PodConnectivityCheck CRD with CNO 1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources 1907475 - Unable to estimate the error rate of ingress across the connected fleet 1907480 - 'Active alerts' section throwing forbidden error for users. 
1907518 - Kamelets/Eventsource should be shown to user if they have create access 1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US 1907610 - Update kubernetes deps to 1.20 1907612 - Update kubernetes deps to 1.20 1907621 - openshift/installer: bump cluster-api-provider-kubevirt version 1907628 - Installer does not set primary subnet consistently 1907632 - Operator Registry should update its kubernetes dependencies to 1.20 1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters 1907644 - fix up handling of non-critical annotations on daemonsets/deployments 1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?) 1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication 1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail 1907767 - [e2e][automation] update test suite for kubevirt plugin 1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot 1907792 - The 'overrides' of the OperatorCondition cannot block the operator upgrade 1907793 - Surface support info in VM template details 1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage 1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set 1907863 - Quickstarts status not updating when starting the tour 1907872 - dual stack with an ipv6 network fails on bootstrap phase 1907874 - QE - Design Gherkin Scenarios for epic ODC-5057 1907875 - No response when try to expand pvc with an invalid size 1907876 - Refactoring record package to make gatherer configurable 1907877 - QE - Automation- pipelines builder scripts 1907883 - Fix Pipleine creation without namespace issue 1907888 - Fix pipeline list page loader 1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form 1907892 - Unable to edit application deployed using "From Devfile" option 1907893 - navSortUtils.spec.ts unit test failure 1907896 - When a workload is added, Topology does not place the new items well 1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template 1907924 - Enable madvdontneed in OpenShift Images 1907929 - Enable madvdontneed in OpenShift System Components Part 2 1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot 1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context 1907948 - OCM-O bump to k8s 1.20 1907952 - bump to k8s 1.20 1907972 - Update OCM link to open Insights tab 1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI 1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916 1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni 1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk 1908035 - dynamic-demo-plugin build does not generate dist directory 1908135 - quick search modal is not centered over topology 1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled 1908159 - [AWS C2S] MCO fails to sync cloud config 1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384 custom type (n1-custom-4-16384) 1908180 - Add source for template is stucking in preparing 
pvc 1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens 1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN 1908277 - QE - Automation- pipelines actions scripts 1908280 - Documentation describing 'ignore-volume-az' is incorrect 1908296 - Fix pipeline builder form yaml switcher validation issue 1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI 1908323 - Create button missing for PLR in the search page 1908342 - The new pv_collector_total_pv_count is not reported via telemetry 1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name 1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots 1908349 - Volume snapshot tests are failing after 1.20 rebase 1908353 - QE - Automation- pipelines runs scripts 1908361 - bump to k8s 1.20 1908367 - QE - Automation- pipelines triggers scripts 1908370 - QE - Automation- pipelines secrets scripts 1908375 - QE - Automation- pipelines workspaces scripts 1908381 - Go Dependency Fixes for Devfile Lib 1908389 - Loadbalancer Sync failing on Azure 1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived 1908407 - Backport Upstream 95269 to fix potential crash in kubelet 1908410 - Exclude Yarn from VSCode search 1908425 - Create Role Binding form subject type and name are undefined when All Project is selected 1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods 1908434 - Remove &apos from metal3-plugin internationalized strings 1908437 - Operator backed with no icon has no badge associated with the CSV tag 1908459 - bump to k8s 1.20 1908461 - Add bugzilla component to OWNERS file 1908462 - RHCOS 4.6 ostree removed dhclient 1908466 - CAPO AZ Screening/Validating 1908467 - Zoom in and zoom out in topology package should be sentence case 1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size 1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster 1908471 - OLM should bump k8s dependencies to 1.20 1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests 1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1908545 - VM clone dialog does not open 1908557 - [e2e][automation] Miss css id on bootsource and reviewcreate step on wizard 1908562 - Pod readiness is not being observed in real world cases 1908565 - [4.6] Cannot filter the platform/arch of the index image 1908573 - Align the style of flavor 1908583 - bootstrap does not run on additional networks if configured for master in install-config 1908596 - Race condition on operator installation 1908598 - Persistent Dashboard shows events for all provisioners 1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state 1908648 - Skip TestKernelType test on OKD, adjust TestExtensions 1908650 - The title of customize wizard is inconsistent 1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator 1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s] 1908687 - Option to save user settings separate when using local bridge (affects console developers only) 1908697 - Show 'kubectl diff' command in the oc diff help page 1908715 - Pressing the arrow up key 
when on topmost quick-search list item it should loop back to bottom 1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds 1908717 - "missing unit character in duration" error in some network dashboards 1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload 1908747 - stale S3 CredentialsRequest in CCO manifest 1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase 1908830 - RHCOS 4.6 - Missing Initiatorname 1908868 - Update empty state message for EventSources and Channels tab 1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes 1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference 1908888 - Dualstack does not work with multiple gateways 1908889 - Bump CNO to k8s 1.20 1908891 - TestDNSForwarding DNS operator e2e test is failing frequently 1908914 - CNO: upgrade nodes before masters 1908918 - Pipeline builder yaml view sidebar is not responsive 1908960 - QE - Design Gherkin Scenarios 1908971 - Gherkin Script for pipeline debt 4.7 1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated 1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console 1908998 - [cinder-csi-driver] doesn't detect the credentials change 1909004 - "No datapoints found" for RHEL node's filesystem graph 1909005 - i18n: workloads list view heading is not translated 1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects 1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type 1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware 1909067 - Web terminal should keep latest output when connection closes 1909070 - PLR and TR Logs component is not streaming as fast as tkn 1909092 - Error Message should not confuse user on Channel form 1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page 1909108 - Machine API components should use 1.20 dependencies 1909116 - Catalog Sort Items dropdown is not aligned on Firefox 1909198 - Move Sink action option is not working 1909207 - Accessibility Issue on monitoring page 1909236 - Remove pinned icon overlap on resource name 1909249 - Intermittent packet drop from pod to pod 1909276 - Accessibility Issue on create project modal 1909289 - oc debug of an init container no longer works 1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2 1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle 1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it 1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O 1909464 - Build operator-registry with golang-1.15 1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found 1909521 - Add kubevirt cluster type for e2e-test workflow 1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created 1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node 1909610 - Fix available capacity when no storage class selected 1909678 - scale up / down buttons available on pod details 
side panel 1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined 1909739 - Arbiter request data changes 1909744 - cluster-api-provider-openstack: Bump gophercloud 1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline 1909791 - Update standalone kube-proxy config for EndpointSlice 1909792 - Empty states for some details page subcomponents are not i18ned 1909815 - Perspective switcher is only half-i18ned 1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body 1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI 1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing 1909911 - [OVN] EgressFirewall caused a segfault 1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument 1909958 - Support Quick Start Highlights Properly 1909978 - ignore-volume-az = yes not working on standard storageClass 1909981 - Improve statement in template select step 1909992 - Fail to pull the bundle image when using the private index image 1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev 1910036 - QE - Design Gherkin Scenarios ODC-4504 1910049 - UPI: ansible-galaxy is not supported 1910127 - [UPI on oVirt]: Improve UPI Documentation 1910140 - fix the api dashboard with changes in upstream kube 1.20 1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable 1910165 - DHCP to static lease script doesn't handle multiple addresses 1910305 - [Descheduler] - The minKubeVersion should be 1.20.0 1910409 - Notification drawer is not localized for i18n 1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials 1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation 1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page 1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work 1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready 1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability 1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded 1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected" 1910753 - Support Directory Path to Devfile 1910805 - Missing translation for Pipeline status and breadcrumb text 1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer 1910840 - Show Nonexistent command info in the 'oc rollback -h' help page 1910859 - breadcrumbs doesn't use last namespace 1910866 - Unify templates string 1910870 - Unify template dropdown action 1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6 1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads" 1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard 1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit 
character in duration" 1911213 - Wrong and misleading warning for VMs that were created manually (not from template) 1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created 1911269 - waiting for the build message present when build exists 1911280 - Builder images are not detected for Dotnet, Httpd, NGINX 1911307 - Pod Scale-up requires extra privileges in OpenShift web-console 1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template 1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error 1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template 1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation 1911418 - [v2v] The target storage class name is not displayed if default storage class is used 1911434 - git ops empty state page displays icon with watermark 1911443 - SSH Cretifiaction field should be validated 1911465 - IOPS display wrong unit 1911474 - Devfile Application Group Does Not Delete Cleanly (errors) 1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController 1911574 - Expose volume mode on Upload Data form 1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined 1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel 1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle'' 1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state 1911782 - Descheduler should not evict pod used local storage by the PVC 1911796 - uploading flow being displayed before submitting the form 1912066 - The ansible type operator's manager container is not stable when managing the CR 1912077 - helm operator's default rbac forbidden 1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory' 1912237 - Rebase CSI sidecars for 4.7 1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page 1912409 - Fix flow schema deployment 1912434 - Update guided tour modal title 1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken 1912523 - Standalone pod status not updating in topology graph 1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion 1912558 - TaskRun list and detail screen doesn't show Pending status 1912563 - p&f: carry 97206: clean up executing request on panic 1912565 - OLM macOS local build broken by moby/term dependency 1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion 1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff 1912590 - publicImageRepository not being populated 1912640 - Go operator's controller pods is forbidden 1912701 - Handle dual-stack configuration for NIC IP 1912703 - multiple queries can't be plotted in the same graph under some conditons 1912730 - Operator backed: In-context should support visual connector if SBO is not installed 1912828 - Align High Performance VMs with High Performance in RHV-UI 1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates 1912852 - VM from wizard - available VM templates - "storage" field is "0 B" 
1912888 - recycler template should be moved to KCM operator 1912907 - Helm chart repository index can contain unresolvable relative URL's 1912916 - Set external traffic policy to cluster for IBM platform 1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller 1912938 - Update confirmation modal for quick starts 1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment 1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment 1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver 1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912977 - rebase upstream static-provisioner 1913006 - Remove etcd v2 specific alerts with etcd_http* metrics 1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip 1913037 - update static-provisioner base image 1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state 1913085 - Regression OLM uses scoped client for CRD installation 1913096 - backport: cadvisor machine metrics are missing in k8s 1.19 1913132 - The installation of Openshift Virtualization reports success early before it 's succeeded eventually 1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root 1913196 - Guided Tour doesn't handle resizing of browser 1913209 - Support modal should be shown for community supported templates 1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort 1913249 - update info alert this template is not aditable 1913285 - VM list empty state should link to virtualization quick starts 1913289 - Rebase AWS EBS CSI driver for 4.7 1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled 1913297 - Remove restriction of taints for arbiter node 1913306 - unnecessary scroll bar is present on quick starts panel 1913325 - 1.20 rebase for openshift-apiserver 1913331 - Import from git: Fails to detect Java builder 1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used 1913343 - (release-4.7) Added changelog file for insights-operator 1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator 1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en." 1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads 1913420 - Time duration setting of resources is not being displayed 1913536 - 4.6.9 -> 4.7 upgrade hangs. 
RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n" 1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase 1913560 - Normal user cannot load template on the new wizard 1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user 1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table 1913568 - Normal user cannot create template 1913582 - [Migration] SDN to OVN migration stucks on MCO for rhel worker 1913585 - Topology descriptive text fixes 1913608 - Table data contains data value None after change time range in graph and change back 1913651 - Improved Red Hat image and crashlooping OpenShift pod collection 1913660 - Change location and text of Pipeline edit flow alert 1913685 - OS field not disabled when creating a VM from a template 1913716 - Include additional use of existing libraries 1913725 - Refactor Insights Operator Plugin states 1913736 - Regression: fails to deploy computes when using root volumes 1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes 1913751 - add third-party network plugin test suite to openshift-tests 1913783 - QE-To fix the merging pr issue, commenting the afterEach() block 1913807 - Template support badge should not be shown for community supported templates 1913821 - Need definitive steps about uninstalling descheduler operator 1913851 - Cluster Tasks are not sorted in pipeline builder 1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists 1913951 - Update the Devfile Sample Repo to an Official Repo Host 1913960 - Cluster Autoscaler should use 1.20 dependencies 1913969 - Field dependency descriptor can sometimes cause an exception 1914060 - Disk created from 'Import via Registry' cannot be used as boot disk 1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy 1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks) 1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances 1914125 - Still using /dev/vde as default device path when create localvolume 1914183 - Empty NAD page is missing link to quickstarts 1914196 - target port in 'from dockerfile' flow does nothing 1914204 - Creating VM from dev perspective may fail with template not found error 1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets 1914212 - [e2e][automation] Add test to validate bootable disk souce 1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes 1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows 1914287 - Bring back selfLink 1914301 - User VM Template source should show the same provider as template itself 1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs 1914309 - /terminal page when WTO not installed shows nonsensical error 1914334 - order of getting started samples is arbitrary 1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x 1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI 1914405 - Quick search modal should be opened when coming back from a selection 1914407 - Its not clear that node-ca is running as non-root 
1914427 - Count of pods on the dashboard is incorrect 1914439 - Typo in SRIOV port create command example 1914451 - cluster-storage-operator pod running as root 1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true 1914642 - Customize Wizard Storage tab does not pass validation 1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling 1914793 - device names should not be translated 1914894 - Warn about using non-groupified api version 1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug 1914932 - Put correct resource name in relatedObjects 1914938 - PVC disk is not shown on customization wizard general tab 1914941 - VM Template rootdisk is not deleted after fetching default disk bus 1914975 - Collect logs from openshift-sdn namespace 1915003 - No estimate of average node readiness during lifetime of a cluster 1915027 - fix MCS blocking iptables rules 1915041 - s3:ListMultipartUploadParts is relied on implicitly 1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons 1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours 1915085 - Pods created and rapidly terminated get stuck 1915114 - [aws-c2s] worker machines are not create during install 1915133 - Missing default pinned nav items in dev perspective 1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource 1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot 1915188 - Remove HostSubnet anonymization 1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment 1915217 - OKD payloads expect to be signed with production keys 1915220 - Remove dropdown workaround for user settings 1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure 1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod 1915277 - [e2e][automation]fix cdi upload form test 1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout 1915304 - Updating scheduling component builder & base images to be consistent with ART 1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node 1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection 1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod 1915357 - Dev Catalog doesn't load anything if virtualization operator is installed 1915379 - New template wizard should require provider and make support input a dropdown type 1915408 - Failure in operator-registry kind e2e test 1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation 1915460 - Cluster name size might affect installations 1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance 1915540 - Silent 4.7 RHCOS install failure on ppc64le 1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI) 1915582 - p&f: carry upstream pr 97860 1915594 - [e2e][automation] Improve test for disk validation 1915617 - Bump bootimage for various fixes 1915624 - "Please fill in the following field: Template provider" blocks customize wizard 1915627 - Translate Guided Tour text. 
1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error 1915647 - Intermittent White screen when the connector dragged to revision 1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased 1915654 - [e2e][automation] Add a verification for Afinity modal should hint "Matching node found" 1915661 - Can't run the 'oc adm prune' command in a pod 1915672 - Kuryr doesn't work with selfLink disabled. 1915674 - Golden image PVC creation - storage size should be taken from the template 1915685 - Message for not supported template is not clear enough 1915760 - Need to increase timeout to wait rhel worker get ready 1915793 - quick starts panel syncs incorrectly across browser windows 1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster 1915818 - vsphere-problem-detector: use "_totals" in metrics 1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol 1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version 1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0 1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics 1915885 - Kuryr doesn't support workers running on multiple subnets 1915898 - TaskRun log output shows "undefined" in streaming 1915907 - test/cmd/builds.sh uses docker.io 1915912 - sig-storage-csi-snapshotter image not available 1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard 1915939 - Resizing the browser window removes Web Terminal Icon 1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] 1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7 1915962 - ROKS: manifest with machine health check fails to apply in 4.7 1915972 - Global configuration breadcrumbs do not work as expected 1915981 - Install ethtool and conntrack in container for debugging 1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception 1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups 1916021 - OLM enters infinite loop if Pending CSV replaces itself 1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry 1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations 1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk 1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration 1916145 - Explicitly set minimum versions of python libraries 1916164 - Update csi-driver-nfs builder & base images to be consistent with ART 1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7 1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third 1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2 1916379 - error metrics from vsphere-problem-detector should be gauge 1916382 - Can't create ext4 filesystems with Ignition 1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates 1916401 - Deleting an ingress controller 
with a bad DNS Record hangs 1916417 - [Kuryr] Must-gather does not have all Custom Resources information 1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image 1916454 - teach CCO about upgradeability from 4.6 to 4.7 1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation 1916502 - Boot disk mirroring fails with mdadm error 1916524 - Two rootdisk shows on storage step 1916580 - Default yaml is broken for VM and VM template 1916621 - oc adm node-logs examples are wrong 1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 1916692 - Possibly fails to destroy LB and thus cluster 1916711 - Update Kube dependencies in MCO to 1.20.0 1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6 1916764 - editing a workload with no application applied, will auto fill the app 1916834 - Pipeline Metrics - Text Updates 1916843 - collect logs from openshift-sdn-controller pod 1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed 1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually 1916888 - OCS wizard Donor chart does not get updated whenDevice Typeis edited 1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together" 1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace 1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document 1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error 1917117 - Common templates - disks screen: invalid disk name 1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created 1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator 1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable. 
1917148 - [oVirt] Consume 23-10 ovirt sdk 1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened 1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console 1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory 1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7 1917327 - annotations.message maybe wrong for NTOPodsNotReady alert 1917367 - Refactor periodic.go 1917371 - Add docs on how to use the built-in profiler 1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console 1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui 1917484 - [BM][IPI] Failed to scale down machineset 1917522 - Deprecate --filter-by-os in oc adm catalog mirror 1917537 - controllers continuously busy reconciling operator 1917551 - use min_over_time for vsphere prometheus alerts 1917585 - OLM Operator install page missing i18n 1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types 1917605 - Deleting an exgw causes pods to no longer route to other exgws 1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API 1917656 - Add to Project/application for eventSources from topology shows 404 1917658 - Show TP badge for sources powered by camel connectors in create flow 1917660 - Editing parallelism of job get error info 1917678 - Could not provision pv when no symlink and target found on rhel worker 1917679 - Hide double CTA in admin pipelineruns tab 1917683 -NodeTextFileCollectorScrapeErroralert in OCP 4.6 cluster. 1917759 - Console operator panics after setting plugin that does not exists to the console-operator config 1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0 1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0 1917799 - Gather s list of names and versions of installed OLM operators 1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error 1917814 - Show Broker create option in eventing under admin perspective 1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1917872 - [oVirt] rebase on latest SDK 2021-01-12 1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image 1917938 - upgrade version of dnsmasq package 1917942 - Canary controller causes panic in ingress-operator 1918019 - Undesired scrollbars in markdown area of QuickStart 1918068 - Flaky olm integration tests 1918085 - reversed name of job and namespace in cvo log 1918112 - Flavor is not editable if a customize VM is created from cli 1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources 1918132 - i18n: Volume Snapshot Contents menu is not translated 1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2 1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP 1918153 - When&character is set as an environment variable in a build config it is getting converted as\u00261918185 - Capitalization on PLR details page 1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections 1918318 - Kamelet connector's are not shown in eventing section under Admin perspective 1918351 - Gather SAP configuration 
(SCC & ClusterRoleBinding) 1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews 1918395 - [ovirt] increase livenessProbe period 1918415 - MCD nil pointer on dropins 1918438 - [ja_JP, zh_CN] Serverless i18n misses 1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig 1918471 - CustomNoUpgrade Feature gates are not working correctly 1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk 1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART 1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART 1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197 1918639 - Event listener with triggerRef crashes the console 1918648 - Subscription page doesn't show InstallPlan correctly 1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack 1918748 - helmchartrepo is not http(s)_proxy-aware 1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI 1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin 1918826 - Insights popover icons are not horizontally aligned 1918879 - need better debug for bad pull secrets 1918958 - The default NMstate instance from the operator is incorrect 1919097 - Close bracket ")" missing at the end of the sentence in the UI 1919231 - quick search modal cut off on smaller screens 1919259 - Make "Add x" singular in Pipeline Builder 1919260 - VM Template list actions should not wrap 1919271 - NM prepender script doesn't support systemd-resolved 1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry 1919379 - dotnet logo out of date 1919387 - Console login fails with no error when it can't write to localStorage 1919396 - A11y Violation: svg-img-alt on Pod Status ring 1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified 1919750 - Search InstallPlans got Minified React error 1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted 1919823 - OCP 4.7 Internationalization Chinese tranlate issue 1919851 - Visualization does not render when Pipeline & Task share same name 1919862 - The tip information foroc new-project --skip-config-writeis wrong 1919876 - VM created via customize wizard cannot inherit template's PVC attributes 1919877 - Click on KSVC breaks with white screen 1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment 1919945 - user entered name value overridden by default value when selecting a git repository 1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference 1919970 - NTO does not update when the tuned profile is updated. 
1919999 - Bump Cluster Resource Operator Golang Versions 1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration 1920200 - user-settings network error results in infinite loop of requests 1920205 - operator-registry e2e tests not working properly 1920214 - Bump golang to 1.15 in cluster-resource-override-admission 1920248 - re-running the pipelinerun with pipelinespec crashes the UI 1920320 - VM template field is "Not available" if it's created from common template 1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode isDisk Mode1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs 1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off 1920426 - Egress Router CNI OWNERS file should have ovn-k team members 1920427 - Need to updateoc loginhelp page since we don't support prompt interactively for the username 1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time 1920438 - openshift-tuned panics on turning debugging on/off. 1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn 1920481 - kuryr-cni pods using unreasonable amount of CPU 1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof 1920524 - Topology graph crashes adding Open Data Hub operator 1920526 - catalog operator causing CPU spikes and bad etcd performance 1920551 - Boot Order is not editable for Templates in "openshift" namespace 1920555 - bump cluster-resource-override-admission api dependencies 1920571 - fcp multipath will not recover failed paths automatically 1920619 - Remove default scheduler profile value 1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present 1920674 - MissingKey errors in bindings namespace 1920684 - Text in language preferences modal is misleading 1920695 - CI is broken because of bad image registry reference in the Makefile 1920756 - update generic-admission-server library to get the system:masters authorization optimization 1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set 1920771 - i18n: Delete persistent volume claim drop down is not translated 1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI 1920912 - Unable to power off BMH from console 1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2" 1920984 - [e2e][automation] some menu items names are out dated 1921013 - Gather PersistentVolume definition (if any) used in image registry config 1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior) 1921087 - 'start next quick start' link doesn't work and is unintuitive 1921088 - test-cmd is failing on volumes.sh pretty consistently 1921248 - Clarify the kubelet configuration cr description 1921253 - Text filter default placeholder text not internationalized 1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window 1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo 1921277 - Fix Warning and Info log statements to handle arguments 1921281 - oc get -o yaml --export returns "error: unknown flag: --export" 
1921458 - [SDK] Gracefully handle therun bundle-upgradeif the lower version operator doesn't exist 1921556 - [OCS with Vault]: OCS pods didn't comeup after deploying with Vault details from UI 1921572 - For external source (i.e GitHub Source) form view as well shows yaml 1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass 1921610 - Pipeline metrics font size inconsistency 1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921655 - [OSP] Incorrect error handling during cloudinfo generation 1921713 - [e2e][automation] fix failing VM migration tests 1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view 1921774 - delete application modal errors when a resource cannot be found 1921806 - Explore page APIResourceLinks aren't i18ned 1921823 - CheckBoxControls not internationalized 1921836 - AccessTableRows don't internationalize "User" or "Group" 1921857 - Test flake when hitting router in e2e tests due to one router not being up to date 1921880 - Dynamic plugins are not initialized on console load in production mode 1921911 - Installer PR #4589 is causing leak of IAM role policy bindings 1921921 - "Global Configuration" breadcrumb does not use sentence case 1921949 - Console bug - source code URL broken for gitlab self-hosted repositories 1921954 - Subscription-related constraints in ResolutionFailed events are misleading 1922015 - buttons in modal header are invisible on Safari 1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated 1922050 - [e2e][automation] Improve vm clone tests 1922066 - Cannot create VM from custom template which has extra disk 1922098 - Namespace selection dialog is not closed after select a namespace 1922099 - Updated Readme documentation for QE code review and setup 1922146 - Egress Router CNI doesn't have logging support. 1922267 - Collect specific ADFS error 1922292 - Bump RHCOS boot images for 4.7 1922454 - CRI-O doesn't enable pprof by default 1922473 - reconcile LSO images for 4.8 1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace 1922782 - Source registry missing docker:// in yaml 1922907 - Interop UI Tests - step implementation for updating feature files 1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons 1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD 1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything 1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources 1923102 - [vsphere-problem-detector-operator] pod's version is not correct 1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot 1923674 - k8s 1.20 vendor dependencies 1923721 - PipelineRun running status icon is not rotating 1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios 1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator 1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator 1923874 - Unable to specify values with % in kubeletconfig 1923888 - Fixes error metadata gathering 1923892 - Update arch.md after refactor. 
1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator 1923895 - Changelog generation. 1923911 - [e2e][automation] Improve tests for vm details page and list filter 1923945 - PVC Name and Namespace resets when user changes os/flavor/workload 1923951 - EventSources showsundefined` in project 1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins 1924046 - Localhost: Refreshing on a Project removes it from nav item urls 1924078 - Topology quick search View all results footer should be sticky. 1924081 - NTO should ship the latest Tuned daemon release 2.15 1924084 - backend tests incorrectly hard-code artifacts dir 1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build 1924135 - Under sufficient load, CRI-O may segfault 1924143 - Code Editor Decorator url is broken for Bitbucket repos 1924188 - Language selector dropdown doesn't always pre-select the language 1924365 - Add extra disk for VM which use boot source PXE 1924383 - Degraded network operator during upgrade to 4.7.z 1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on 1924583 - Deprectaed templates are listed in the Templates screen 1924870 - pick upstream pr#96901: plumb context with request deadline 1924955 - Images from Private external registry not working in deploy Image 1924961 - k8sutil.TrimDNS1123Label creates invalid values 1924985 - Build egress-router-cni for both RHEL 7 and 8 1925020 - Console demo plugin deployment image shoult not point to dockerhub 1925024 - Remove extra validations on kafka source form view net section 1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running 1925072 - NTO needs to ship the current latest stalld v1.7.0 1925163 - Missing info about dev catalog in boot source template column 1925200 - Monitoring Alert icon is missing on the workload in Topology view 1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1 1925319 - bash syntax error in configure-ovs.sh script 1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data 1925516 - Pipeline Metrics Tooltips are overlapping data 1925562 - Add new ArgoCD link from GitOps application environments page 1925596 - Gitops details page image and commit id text overflows past card boundary 1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test 1926588 - The tarball of operator-sdk is not ready for ocp4.7 1927456 - 4.7 still points to 4.6 catalog images 1927500 - API server exits non-zero on 2 SIGTERM signals 1929278 - Monitoring workloads using too high a priorityclass 1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api 1929920 - Cluster monitoring documentation link is broken - 404 not found

  1. References:

https://access.redhat.com/security/cve/CVE-2018-10103 https://access.redhat.com/security/cve/CVE-2018-10105 https://access.redhat.com/security/cve/CVE-2018-14461 https://access.redhat.com/security/cve/CVE-2018-14462 https://access.redhat.com/security/cve/CVE-2018-14463 https://access.redhat.com/security/cve/CVE-2018-14464 https://access.redhat.com/security/cve/CVE-2018-14465 https://access.redhat.com/security/cve/CVE-2018-14466 https://access.redhat.com/security/cve/CVE-2018-14467 https://access.redhat.com/security/cve/CVE-2018-14468 https://access.redhat.com/security/cve/CVE-2018-14469 https://access.redhat.com/security/cve/CVE-2018-14470 https://access.redhat.com/security/cve/CVE-2018-14553 https://access.redhat.com/security/cve/CVE-2018-14879 https://access.redhat.com/security/cve/CVE-2018-14880 https://access.redhat.com/security/cve/CVE-2018-14881 https://access.redhat.com/security/cve/CVE-2018-14882 https://access.redhat.com/security/cve/CVE-2018-16227 https://access.redhat.com/security/cve/CVE-2018-16228 https://access.redhat.com/security/cve/CVE-2018-16229 https://access.redhat.com/security/cve/CVE-2018-16230 https://access.redhat.com/security/cve/CVE-2018-16300 https://access.redhat.com/security/cve/CVE-2018-16451 https://access.redhat.com/security/cve/CVE-2018-16452 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2019-3884 https://access.redhat.com/security/cve/CVE-2019-5018 https://access.redhat.com/security/cve/CVE-2019-6977 https://access.redhat.com/security/cve/CVE-2019-6978 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9455 https://access.redhat.com/security/cve/CVE-2019-9458 https://access.redhat.com/security/cve/CVE-2019-11068 https://access.redhat.com/security/cve/CVE-2019-12614 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13225 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15165 https://access.redhat.com/security/cve/CVE-2019-15166 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-15917 https://access.redhat.com/security/cve/CVE-2019-15925 https://access.redhat.com/security/cve/CVE-2019-16167 
https://access.redhat.com/security/cve/CVE-2019-16168 https://access.redhat.com/security/cve/CVE-2019-16231 https://access.redhat.com/security/cve/CVE-2019-16233 https://access.redhat.com/security/cve/CVE-2019-16935 https://access.redhat.com/security/cve/CVE-2019-17450 https://access.redhat.com/security/cve/CVE-2019-17546 https://access.redhat.com/security/cve/CVE-2019-18197 https://access.redhat.com/security/cve/CVE-2019-18808 https://access.redhat.com/security/cve/CVE-2019-18809 https://access.redhat.com/security/cve/CVE-2019-19046 https://access.redhat.com/security/cve/CVE-2019-19056 https://access.redhat.com/security/cve/CVE-2019-19062 https://access.redhat.com/security/cve/CVE-2019-19063 https://access.redhat.com/security/cve/CVE-2019-19068 https://access.redhat.com/security/cve/CVE-2019-19072 https://access.redhat.com/security/cve/CVE-2019-19221 https://access.redhat.com/security/cve/CVE-2019-19319 https://access.redhat.com/security/cve/CVE-2019-19332 https://access.redhat.com/security/cve/CVE-2019-19447 https://access.redhat.com/security/cve/CVE-2019-19524 https://access.redhat.com/security/cve/CVE-2019-19533 https://access.redhat.com/security/cve/CVE-2019-19537 https://access.redhat.com/security/cve/CVE-2019-19543 https://access.redhat.com/security/cve/CVE-2019-19602 https://access.redhat.com/security/cve/CVE-2019-19767 https://access.redhat.com/security/cve/CVE-2019-19770 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-19956 https://access.redhat.com/security/cve/CVE-2019-20054 https://access.redhat.com/security/cve/CVE-2019-20218 https://access.redhat.com/security/cve/CVE-2019-20386 https://access.redhat.com/security/cve/CVE-2019-20387 https://access.redhat.com/security/cve/CVE-2019-20388 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20636 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-20812 https://access.redhat.com/security/cve/CVE-2019-20907 https://access.redhat.com/security/cve/CVE-2019-20916 https://access.redhat.com/security/cve/CVE-2020-0305 https://access.redhat.com/security/cve/CVE-2020-0444 https://access.redhat.com/security/cve/CVE-2020-1716 https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-1751 https://access.redhat.com/security/cve/CVE-2020-1752 https://access.redhat.com/security/cve/CVE-2020-1971 https://access.redhat.com/security/cve/CVE-2020-2574 https://access.redhat.com/security/cve/CVE-2020-2752 https://access.redhat.com/security/cve/CVE-2020-2922 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3898 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-6405 https://access.redhat.com/security/cve/CVE-2020-7595 https://access.redhat.com/security/cve/CVE-2020-7774 
https://access.redhat.com/security/cve/CVE-2020-8177 https://access.redhat.com/security/cve/CVE-2020-8492 https://access.redhat.com/security/cve/CVE-2020-8563 https://access.redhat.com/security/cve/CVE-2020-8566 https://access.redhat.com/security/cve/CVE-2020-8619 https://access.redhat.com/security/cve/CVE-2020-8622 https://access.redhat.com/security/cve/CVE-2020-8623 https://access.redhat.com/security/cve/CVE-2020-8624 https://access.redhat.com/security/cve/CVE-2020-8647 https://access.redhat.com/security/cve/CVE-2020-8648 https://access.redhat.com/security/cve/CVE-2020-8649 https://access.redhat.com/security/cve/CVE-2020-9327 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-10029 https://access.redhat.com/security/cve/CVE-2020-10732 https://access.redhat.com/security/cve/CVE-2020-10749 https://access.redhat.com/security/cve/CVE-2020-10751 https://access.redhat.com/security/cve/CVE-2020-10763 https://access.redhat.com/security/cve/CVE-2020-10773 https://access.redhat.com/security/cve/CVE-2020-10774 https://access.redhat.com/security/cve/CVE-2020-10942 https://access.redhat.com/security/cve/CVE-2020-11565 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-12465 https://access.redhat.com/security/cve/CVE-2020-12655 https://access.redhat.com/security/cve/CVE-2020-12659 https://access.redhat.com/security/cve/CVE-2020-12770 https://access.redhat.com/security/cve/CVE-2020-12826 https://access.redhat.com/security/cve/CVE-2020-13249 https://access.redhat.com/security/cve/CVE-2020-13630 https://access.redhat.com/security/cve/CVE-2020-13631 https://access.redhat.com/security/cve/CVE-2020-13632 https://access.redhat.com/security/cve/CVE-2020-14019 https://access.redhat.com/security/cve/CVE-2020-14040 https://access.redhat.com/security/cve/CVE-2020-14381 https://access.redhat.com/security/cve/CVE-2020-14382 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-14422 https://access.redhat.com/security/cve/CVE-2020-15157 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-15862 https://access.redhat.com/security/cve/CVE-2020-15999 https://access.redhat.com/security/cve/CVE-2020-16166 https://access.redhat.com/security/cve/CVE-2020-24490 https://access.redhat.com/security/cve/CVE-2020-24659 https://access.redhat.com/security/cve/CVE-2020-25211 https://access.redhat.com/security/cve/CVE-2020-25641 https://access.redhat.com/security/cve/CVE-2020-25658 https://access.redhat.com/security/cve/CVE-2020-25661 https://access.redhat.com/security/cve/CVE-2020-25662 https://access.redhat.com/security/cve/CVE-2020-25681 https://access.redhat.com/security/cve/CVE-2020-25682 
https://access.redhat.com/security/cve/CVE-2020-25683 https://access.redhat.com/security/cve/CVE-2020-25684 https://access.redhat.com/security/cve/CVE-2020-25685 https://access.redhat.com/security/cve/CVE-2020-25686 https://access.redhat.com/security/cve/CVE-2020-25687 https://access.redhat.com/security/cve/CVE-2020-25694 https://access.redhat.com/security/cve/CVE-2020-25696 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-27813 https://access.redhat.com/security/cve/CVE-2020-27846 https://access.redhat.com/security/cve/CVE-2020-28362 https://access.redhat.com/security/cve/CVE-2020-29652 https://access.redhat.com/security/cve/CVE-2021-2007 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/updates/classification/#moderate
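
Checking the current fix state for any of the CVEs listed above can be scripted against Red Hat's public security data. The following is a minimal sketch, assuming the Red Hat Security Data API endpoint (https://access.redhat.com/hydra/rest/securitydata/cve/<CVE-ID>.json) and its "affected_release" and "package_state" fields are still served in this form; the field names and layout are assumptions and may change.

    #!/usr/bin/env python3
    # Illustrative sketch: look up one of the CVEs referenced above and print
    # which advisories/products list a fix (endpoint and field names are
    # assumptions based on the public Red Hat Security Data API).
    import json
    import sys
    import urllib.request

    def print_fix_state(cve_id: str) -> None:
        url = f"https://access.redhat.com/hydra/rest/securitydata/cve/{cve_id}.json"
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = json.load(resp)
        print(cve_id, "-", data.get("threat_severity", "severity unknown"))
        for rel in data.get("affected_release", []):   # releases that ship a fix
            print("  fixed:", rel.get("advisory"), "-", rel.get("product_name"))
        for state in data.get("package_state", []):    # everything else
            print(" ", state.get("fix_state"), "-", state.get("product_name"))

    if __name__ == "__main__":
        print_fix_state(sys.argv[1] if len(sys.argv) > 1 else "CVE-2021-3121")

Run it with a CVE identifier from the list above as the only argument, for example CVE-2020-14040.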

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

APPLE-SA-2020-05-26-10 iCloud for Windows 7.19

iCloud for Windows 7.19 is now available and addresses the following:

ImageIO
Available for: Windows 7 and later
Impact: Processing a maliciously crafted image may lead to arbitrary code execution
Description: An out-of-bounds write issue was addressed with improved bounds checking.
CVE-2020-9789: Wenchao Li of VARAS@IIE
CVE-2020-9790: Xingwei Lin of Ant-financial Light-Year Security Lab

ImageIO
Available for: Windows 7 and later
Impact: Processing a maliciously crafted image may lead to arbitrary code execution
Description: An out-of-bounds read was addressed with improved input validation.
CVE-2020-3878: Samuel Groß of Google Project Zero

SQLite
Available for: Windows 7 and later
Impact: A malicious application may cause a denial of service or potentially disclose memory contents
Description: An out-of-bounds read was addressed with improved bounds checking.
CVE-2020-9794

WebKit
Available for: Windows 7 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A logic issue was addressed with improved restrictions.
CVE-2020-9802: Samuel Groß of Google Project Zero

WebKit
Available for: Windows 7 and later
Impact: Processing maliciously crafted web content may lead to universal cross site scripting
Description: A logic issue was addressed with improved restrictions.
CVE-2020-9805: an anonymous researcher

WebKit
Available for: Windows 7 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A type confusion issue was addressed with improved memory handling.
CVE-2020-9800: Brendan Draper (@6r3nd4n) working with Trend Micro Zero Day Initiative

WebKit
Available for: Windows 7 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A memory corruption issue was addressed with improved state management.
CVE-2020-9806: Wen Xu of SSLab at Georgia Tech
CVE-2020-9807: Wen Xu of SSLab at Georgia Tech

WebKit
Available for: Windows 7 and later
Impact: A remote attacker may be able to cause arbitrary code execution
Description: A logic issue was addressed with improved restrictions.
CVE-2020-9850: @jinmo123, @setuid0x0_, and @insu_yun_en of @SSLab_Gatech working with Trend Micro’s Zero Day Initiative

WebKit
Available for: Windows 7 and later
Impact: Processing maliciously crafted web content may lead to a cross site scripting attack
Description: An input validation issue was addressed with improved input validation.
CVE-2020-9843: Ryan Pickren (ryanpickren.com)

WebKit
Available for: Windows 7 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A memory corruption issue was addressed with improved validation.
CVE-2020-9803: Wen Xu of SSLab at Georgia Tech

Additional recognition

ImageIO We would like to acknowledge Lei Sun for their assistance.

WebKit We would like to acknowledge Aidan Dunlap of UT Austin for their assistance.
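
Since every fix above ships in iCloud for Windows 7.19, verifying a Windows host comes down to a version comparison. The sketch below is a generic illustration, not an Apple-provided check; how the installed version string is obtained (software inventory, the application's properties, and so on) is left as an assumption.

    # Minimal sketch: compare an installed "iCloud for Windows" version string
    # against the 7.19 release that contains the fixes above. Obtaining the
    # installed string is left to local tooling (assumption).
    def parse_version(text: str) -> tuple:
        return tuple(int(part) for part in text.strip().split(".") if part.isdigit())

    def is_patched(installed: str, fixed: str = "7.19") -> bool:
        return parse_version(installed) >= parse_version(fixed)

    if __name__ == "__main__":
        for sample in ("7.18.0.27", "7.19", "7.20.1"):
            print(sample, "patched:", is_patched(sample))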

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling.

Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1918990 - ComplianceSuite scans use quay content image for initContainer 1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present 1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules 1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902251 - The compliancesuite object returns error with ocp4-cis tailored profile 1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object 1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object 1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator 1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" 1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup
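
Once a Compliance Operator build containing the fixes above is installed, re-running a scan and listing the check results is the quickest way to confirm that remediations were applied. The sketch below simply shells out to the oc client; the compliancecheckresults resource name and the openshift-compliance namespace are the operator's usual defaults and are treated as assumptions here.

    #!/usr/bin/env python3
    # Illustrative sketch: list ComplianceCheckResult objects that still report
    # FAIL, mirroring the manual verification described in the bugs above.
    # Assumes oc is on PATH, the user is logged in, and the Compliance Operator
    # uses its usual resource name and namespace (adjust as needed).
    import json
    import subprocess

    def failing_checks(namespace: str = "openshift-compliance"):
        out = subprocess.run(
            ["oc", "get", "compliancecheckresults", "-n", namespace, "-o", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        for item in json.loads(out).get("items", []):
            if item.get("status") == "FAIL":
                yield item["metadata"]["name"]

    if __name__ == "__main__":
        for name in failing_checks():
            print("still failing:", name)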

  1. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor 1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv 1791753 - [RFE] [SSP] Template validator should check validations in template's parent template 1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic 1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration 1848956 - KMP requires downtime for CA stabilization during certificate rotation 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1853911 - VM with dot in network name fails to start with unclear message 1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show" 1856347 - SR-IOV : Missing network name for sriov during vm setup 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination 1860714 - No API information from oc explain 1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints 1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem 1866593 - CDI is not handling vm disk clone 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868817 - Container-native Virtualization 2.6.0 Images 1873771 - Improve the VMCreationFailed error message caused by VM low memory 1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it 1878499 - DV import doesn't recover from scratch space PVC deletion 1879108 - Inconsistent naming of "oc virt" command in help text 1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running 1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message 1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used 1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied 1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment 1838802 - mysql8 connector from operatorhub does not work with metering operator 1838845 - Metering operator can't connect to postgres DB from Operator Hub 1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1868294 - NFD operator does not allow customisation of nfd-worker.conf 1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration 1890672 - NFD is missing a build flag to build correctly 1890741 - path to the CA trust bundle ConfigMap is broken in report operator 1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster 1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel 1900125 - FIPS error while generating RSA private key for CA 1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub 1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub 1913837 - The CI and ART 4.7 metering images are not mirrored 1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le 1916010 - olm skip range is set to the wrong range 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1923998 - NFD Operator is failing to update and remains in Replacing state

  1. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... 
must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 2007379 - Events are not generated for master offset for ordinary clock 2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace 2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address 2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error 2007522 - No new local-storage-operator-metadata-container is build for 4.10 2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10 2007580 - Azure cilium installs are failing e2e tests 2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10 2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes 2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures 2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow 2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates 2007802 - AWS machine actuator get stuck if machine is completely missing 2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator 2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process 2008151 - Topology breaks on clicking in empty state 2008185 - Console operator go.mod should use go 1.16.version 2008201 - openstack-az job is failing on haproxy idle test 2008207 - vsphere CSI driver doesn't set resource limits 2008223 - gather_audit_logs: fix oc command line to get the current audit profile 2008235 - The Save button in the Edit DC form remains disabled 2008256 - Update Internationalization README with scope info 2008321 - Add correct documentation link for MON_DISK_LOW 2008462 - Disable PodSecurity feature 
gate for 4.10 2008490 - Backing store details page does not contain all the kebab actions. 2010181 - Environment variables not getting reset on reload on deployment edit form 2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2010341 - OpenShift Alerting Rules Style-Guide Compliance 2010342 - Local console builds can have out of memory errors 2010345 - OpenShift Alerting Rules Style-Guide Compliance 2010348 - Reverts PIE build mode for K8S components 2010352 - OpenShift Alerting Rules Style-Guide Compliance 2010354 - OpenShift Alerting Rules Style-Guide Compliance 2010359 - OpenShift Alerting Rules Style-Guide Compliance 2010368 - OpenShift Alerting Rules Style-Guide Compliance 2010376 - OpenShift Alerting Rules Style-Guide Compliance 2010662 - Cluster is unhealthy after image-registry-operator tests 2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent) 2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API 2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address 2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing 2010864 - Failure building EFS operator 2010910 - ptp worker events unable to identify interface for multiple interfaces 2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24 2010921 - Azure Stack Hub does not handle additionalTrustBundle 2010931 - SRO CSV uses non default category "Drivers and plugins" 2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 2011038 - optional operator conditions are confusing 2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass 2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's 2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image 2011368 - Tooltip in pipeline visualization shows misleading data 2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels 2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards 2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster 2011513 - Kubelet rejects pods that use resources that should be freed by completed pods 2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine" 2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented 2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore 2011733 - Repository README points to broken documentarion link 2011753 - Ironic resumes clean before raid configuration job is actually completed 2011809 - The nodes page in the openshift console doesn't work. 
You just get a blank page 2011822 - Obfuscation doesn't work at clusters with OVN 2011882 - SRO helm charts not synced with templates 2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot 2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages 2011903 - vsphere-problem-detector: session leak 2011927 - OLM should allow users to specify a proxy for GRPC connections 2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods 2011960 - [tracker] Storage operator is not available after reboot cluster instances 2011971 - ICNI2 pods are stuck in ContainerCreating state 2011972 - Ingress operator not creating wildcard route for hypershift clusters 2011977 - SRO bundle references non-existent image 2012069 - Refactoring Status controller 2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI 2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group 2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)" 2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig 2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off 2012407 - [e2e][automation] improve vm tab console tests 2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label 2012562 - migration condition is not detected in list view 2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written 2012780 - The port 50936 used by haproxy is occupied by kube-apiserver 2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working 2012902 - Neutron Ports assigned to Completed Pods are not reused Edit 2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack 2012971 - Disable operands deletes 2013034 - Cannot install to openshift-nmstate namespace 2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine) 2013199 - post reboot of node SRIOV policy taking huge time 2013203 - UI breaks when trying to create block pool before storage cluster/system creation 2013222 - Full breakage for nightly payload promotion 2013273 - Nil pointer exception when phc2sys options are missing 2013321 - TuneD: high CPU utilization of the TuneD daemon. 2013416 - Multiple assets emit different content to the same filename 2013431 - Application selector dropdown has incorrect font-size and positioning 2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8 2013545 - Service binding created outside topology is not visible 2013599 - Scorecard support storage is not included in ocp4.9 2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide) 2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on 'Add Capacity' button click 2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu 2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English 2015549 - Observe - Metrics: Column heading and pagination text is in English 2015557 - Workloads - DeploymentConfigs : Error message is in English 2015568 - Compute - Nodes : CPU column's values are in English 2015635 - Storage operator fails causing installation to fail on ASH 2015660 - "Finishing boot source customization" screen should not use term "patched" 2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node 2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin 2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning 2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud 2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch 2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail 2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed) 2016008 - [4.10] Bootimage bump tracker 2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver 2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator 2016054 - No e2e CI presubmit configured for release component cluster-autoscaler 2016055 - No e2e CI presubmit configured for release component console 2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8" 2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager 2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers 2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations 2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager 2016235 - should update to 7.5.11 for grafana resources version label 2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails 2016334 - shiftstack: SRIOV nic reported as not supported 2016352 - Some pods start before CA resources are present 2016367 - Empty task box is getting created for a pipeline without finally task 2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts 2016438 - Feature flag gating is missing in few extensions contributed via knative plugin 2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc 2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets 2016453 - Complete i18n for GaugeChart defaults 2016479 - iface-id-ver is not getting updated for existing lsp 2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear 2016951 - dynamic actions list is not disabling "open console" for stopped vms 2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available 2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances 2017016 - [REF] Virtualization menu 2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn 2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly 2017130 - t is not a function error navigating to details page 2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue 2017244 - ovirt csi operator static files creation is in the wrong order 2017276 - [4.10] Volume mounts not created with the correct security context 2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed. 2022447 - ServiceAccount in manifests conflicts with OLM 2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. 
2025821 - Make "Network Attachment Definitions" available to regular user 2025823 - The console nav bar ignores plugin separator in existing sections 2025830 - CentOS capitalizaion is wrong 2025837 - Warn users that the RHEL URL expire 2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-* 2025903 - [UI] RoleBindings tab doesn't show correct rolebindings 2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2026178 - OpenShift Alerting Rules Style-Guide Compliance 2026209 - Updation of task is getting failed (tekton hub integration) 2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io" 2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates 2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct 2026352 - Kube-Scheduler revision-pruner fail during install of new cluster 2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment 2026383 - Error when rendering custom Grafana dashboard through ConfigMap 2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation 2026396 - Cachito Issues: sriov-network-operator Image build failure 2026488 - openshift-controller-manager - delete event is repeating pathologically 2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists 2039382 - gather_metallb_logs does not have execution permission 2039406 - logout from rest session after vsphere operator sync is finished 2039408 - Add GCP region northamerica-northeast2 to allowed regions 2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration 2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment 2039491 - oc - git:// protocol used in unit tests 2039516 - Bump OVN to ovn21.12-21.12.0-25 2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate 2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled 2039541 - Resolv-prepender script duplicating entries 2039586 - [e2e] update centos8 to centos stream8 2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty 2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3' 2039670 - Create PDBs for control plane components 2039678 - Page goes blank when create image pull secret 2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported 2039743 - React missing key warning when open operator hub detail page (and maybe others as well) 2039756 - React missing key warning when open KnativeServing details 2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab 2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard 2039781 - [GSS] OBC is not visible by admin of a Project on Console 2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector 2039868 - Insights Advisor widget is not in the disabled state when the 
Insights Operator is disabled 2039880 - Log level too low for control plane metrics 2039919 - Add E2E test for router compression feature 2039981 - ZTP for standard clusters installs stalld on master nodes 2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. 2043117 - Recommended operators links are erroneously treated as external 2043130 - Update CSI sidecars to the latest release for 4.10 2043234 - Missing validation when creating several BGPPeers with the same peerAddress 2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler 2043254 - crio does not bind the security profiles directory 2043296 - Ignition fails when reusing existing statically-keyed LUKS volume 2043297 - [4.10] Bootimage bump tracker 2043316 - RHCOS VM fails to boot on Nutanix AOS 2043446 - Rebase aws-efs-utils to the latest upstream version. 2043556 - Add proper ci-operator configuration to ironic and ironic-agent images 2043577 - DPU network operator 2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator 2043675 - Too many machines deleted by cluster autoscaler when scaling down 2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation 2043709 - Logging flags no longer being bound to command line 2043721 - Installer bootstrap hosts using outdated kubelet containing bugs 2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather 2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23 2043780 - Bump router to k8s.io/api 1.23 2043787 - Bump cluster-dns-operator to k8s.io/api 1.23 2043801 - Bump CoreDNS to k8s.io/api 1.23 2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown 2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected. 2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.

CVE-2020-13753

Milan Crha discovered that an attacker may be able to execute
commands outside the bubblewrap sandbox.

For the stable distribution (buster), these problems have been fixed in version 2.28.3-2~deb10u1.

We recommend that you upgrade your webkit2gtk packages.
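As an illustration of that upgrade recommendation, the following is a minimal sketch (not part of the Debian advisory) that reports whether the installed webkit2gtk runtime on a Debian buster host is older than the fixed version 2.28.3-2~deb10u1, using dpkg's own version-comparison rules. The package name libwebkit2gtk-4.0-37 is an assumption about the runtime package on buster and may differ on a given system.

#!/usr/bin/env python3
# Minimal sketch, assuming a Debian buster host with dpkg available.
# Compares the installed webkit2gtk runtime against the fixed version
# named in this advisory; the package name is an assumed default.

import subprocess
from typing import Optional

FIXED_VERSION = "2.28.3-2~deb10u1"   # fixed version from the advisory text
PACKAGE = "libwebkit2gtk-4.0-37"     # assumed runtime package name; adjust as needed


def installed_version(package: str) -> Optional[str]:
    """Return the installed version string via dpkg-query, or None if absent."""
    try:
        result = subprocess.run(
            ["dpkg-query", "-W", "-f=${Version}", package],
            capture_output=True, text=True, check=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return None
    return result.stdout.strip() or None


def is_older(version: str, reference: str) -> bool:
    """Compare with dpkg's version rules; exit code 0 means 'lt' holds."""
    return subprocess.run(
        ["dpkg", "--compare-versions", version, "lt", reference]
    ).returncode == 0


if __name__ == "__main__":
    current = installed_version(PACKAGE)
    if current is None:
        print(PACKAGE + " does not appear to be installed")
    elif is_older(current, FIXED_VERSION):
        print(PACKAGE + " " + current + " predates " + FIXED_VERSION + "; upgrade recommended")
    else:
        print(PACKAGE + " " + current + " already includes the fix")

The same check can, of course, be done directly with dpkg -l and a normal apt upgrade once the buster security repository is enabled; the sketch only automates the version comparison.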

For the detailed security status of webkit2gtk please refer to its security tracker page at: https://security-tracker.debian.org/tracker/webkit2gtk

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202006-1640",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "6.2.5"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.2"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.7"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.5"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.5"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.1.1"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.4.5"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.0"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.19"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.1.1 \u672a\u6e80 (macos catalina)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.4.5 \u672a\u6e80 (apple tv 4k)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.1.1 \u672a\u6e80 (macos mojave)"
      },
      {
        "model": "itunes",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 12.10.7 \u672a\u6e80 (windows 7 \u4ee5\u964d)"
      },
      {
        "model": "watchos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "6.2.5 \u672a\u6e80 (apple watch series 1 \u4ee5\u964d )"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.5.1 \u672a\u6e80 (ipad air 2 \u4ee5\u964d)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.1.1 \u672a\u6e80 (macos high sierra)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.5.1 \u672a\u6e80 (ipad mini 4 \u4ee5\u964d)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 7.19 \u672a\u6e80 (windows 7 \u4ee5\u964d)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 11.2 \u672a\u6e80 (windows 10 \u4ee5\u964d)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.4.5 \u672a\u6e80 (apple tv hd)"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.5.1 \u672a\u6e80 (iphone 6s \u4ee5\u964d)"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.5.1 \u672a\u6e80 (ipod touch \u7b2c 7 \u4e16\u4ee3)"
      },
      {
        "model": "safari",
        "scope": null,
        "trust": 0.7,
        "vendor": "apple",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-672"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9850"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:apple:icloud:*:*:*:*:*:windows:*:*",
                "cpe_name": [],
                "versionEndExcluding": "7.19",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:apple:icloud:*:*:*:*:*:windows:*:*",
                "cpe_name": [],
                "versionEndExcluding": "11.2",
                "versionStartIncluding": "11.0",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:apple:itunes:*:*:*:*:*:windows:*:*",
                "cpe_name": [],
                "versionEndExcluding": "12.10.7",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:apple:safari:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "13.1.1",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "13.5",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:apple:watchos:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "6.2.5",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:apple:tvos:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "13.4.5",
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "13.5",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-9850"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      }
    ],
    "trust": 0.8
  },
  "cve": "CVE-2020-9850",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "NVD",
            "availabilityImpact": "PARTIAL",
            "baseScore": 7.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "HIGH",
            "trust": 1.0,
            "userInteractionRequired": false,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Low",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "Partial",
            "baseScore": 7.5,
            "confidentialityImpact": "Partial",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-006153",
            "impactScore": null,
            "integrityImpact": "Partial",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "High",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 7.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "VHN-187975",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "HIGH",
            "trust": 0.1,
            "vectorString": "AV:N/AC:L/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULMON",
            "availabilityImpact": "PARTIAL",
            "baseScore": 7.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "CVE-2020-9850",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "HIGH",
            "trust": 0.1,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "NVD",
            "availabilityImpact": "HIGH",
            "baseScore": 9.8,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 9.8,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-006153",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          },
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "ZDI",
            "availabilityImpact": "LOW",
            "baseScore": 7.3,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-9850",
            "impactScore": 3.4,
            "integrityImpact": "LOW",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 0.7,
            "userInteraction": "NONE",
            "vectorString": "AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2020-9850",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "NVD",
            "id": "JVNDB-2020-006153",
            "trust": 0.8,
            "value": "Critical"
          },
          {
            "author": "ZDI",
            "id": "CVE-2020-9850",
            "trust": 0.7,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202005-1256",
            "trust": 0.6,
            "value": "CRITICAL"
          },
          {
            "author": "VULHUB",
            "id": "VHN-187975",
            "trust": 0.1,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-9850",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-672"
      },
      {
        "db": "VULHUB",
        "id": "VHN-187975"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9850"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-1256"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9850"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A logic issue was addressed with improved restrictions. This issue is fixed in iOS 13.5 and iPadOS 13.5, tvOS 13.4.5, watchOS 6.2.5, Safari 13.1.1, iTunes 12.10.7 for Windows, iCloud for Windows 11.2, iCloud for Windows 7.19. A remote attacker may be able to cause arbitrary code execution. plural Apple The product contains a logic vulnerability due to a flawed handling of restrictions.Arbitrary code could be executed by a remote attacker. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file.The specific flaw exists within the implementation of the HasIndexedProperty DFG node. By performing actions in JavaScript, an attacker can trigger a type confusion condition. An attacker can leverage this vulnerability to execute code in the context of the current process. Apple iOS, etc. are all products of Apple (Apple). Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. Apple iPadOS is an operating system for iPad tablets. WebKit is one of the web browser engine components. A security vulnerability exists in the WebKit component of several Apple products. In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3 compatible API. \n\nThese updated images include numerous security fixes, bug fixes, and\nenhancements. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume\n1813506 - Dockerfile not  compatible with docker and buildah\n1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup\n1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement\n1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance\n1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)\n1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node. 
\n1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default\n1842254 - [NooBaa] Compression stats do not add up when compression id disabled\n1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster\n1849771 - [RFE] Account created by OBC should have same permissions as bucket owner\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot\n1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume\n1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount\n1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. if the PV spec does not contain expansion params)\n1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips \"b\" and \"c\" (spawned from Bug 1840084#c14)\n1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage\n1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards\n1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found\n1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining\n1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script\n1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH  while running couple of OCS test cases. Solution:\n\nDownload the release images via:\n\nquay.io/redhat/quay:v3.3.3\nquay.io/redhat/clair-jwt:v3.3.3\nquay.io/redhat/quay-builder:v3.3.3\nquay.io/redhat/clair:v3.3.3\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1905758 - CVE-2020-27831 quay: email notifications authorization bypass\n1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1124 - NVD feed is broken for latest Clair v2 version\n\n6. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2020:5633-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2020:5633\nIssue date:        2021-02-24\nCVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 \n                   CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 \n                   CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 \n                   CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 \n                   CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 \n                   CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 \n                   CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 \n                   CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 \n                   CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 \n                   CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 \n                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n                   CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 \n                   CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 \n                   CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 \n                   CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 \n                   CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 \n                   CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 \n                   CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 \n                   CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 \n                   CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 \n                   CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 \n                   CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 \n                   CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 \n                   CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 \n                   CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 \n                   CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 \n                   CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 \n                   CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 \n                   CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 \n                   CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 \n                   CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 \n                   CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 \n                   CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 \n                   CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 \n                   CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 \n                   CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 \n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n                   CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 \n                   CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 \n                   CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 \n                   CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 \n                   CVE-2020-8648 CVE-2020-8649 
CVE-2020-9327 \n                   CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 \n                   CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 \n                   CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 \n                   CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 \n                   CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 \n                   CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 \n                   CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 \n                   CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 \n                   CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 \n                   CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 \n                   CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 \n                   CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 \n                   CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 \n                   CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 \n                   CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 \n                   CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 \n                   CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 \n                   CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 \n                   CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 \n                   CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 \n                   CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 \n                   CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 \n                   CVE-2021-2007 CVE-2021-3121 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.7.0 is now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2020:5634\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-x86_64\n\nThe image digest is\nsha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-s390x\n\nThe image digest is\nsha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le\n\nThe image digest is\nsha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. 
To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor. \n\nSecurity Fix(es):\n\n* crewjam/saml: authentication bypass in saml authentication\n(CVE-2020-27846)\n\n* golang: crypto/ssh: crafted authentication request can lead to nil\npointer dereference (CVE-2020-29652)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* kubernetes: Secret leaks in kube-controller-manager when using vSphere\nProvider (CVE-2020-8563)\n\n* containernetworking/plugins: IPv6 router advertisements allow for MitM\nattacks on IPv4 clusters (CVE-2020-10749)\n\n* heketi: gluster-block volume password details available in logs\n(CVE-2020-10763)\n\n* golang.org/x/text: possibility to trigger an infinite loop in\nencoding/unicode could lead to crash (CVE-2020-14040)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* golang-github-gorilla-websocket: integer overflow leads to denial of\nservice (CVE-2020-27813)\n\n* golang: math/big: panic during recursive division of very large numbers\n(CVE-2020-28362)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nFor OpenShift Container Platform 4.7, see the following documentation,\nwhich\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1620608 - Restoring deployment config with history leads to weird state\n1752220 - [OVN] Network Policy fails to work when project label gets overwritten\n1756096 - Local storage operator should implement must-gather spec\n1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n1768255 - installer reports 100% complete but failing components\n1770017 - Init containers restart when the exited container is removed from node. \n1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating\n1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset\n1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale\n1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands\n1784298 - \"Displaying with reduced resolution due to large dataset.\" would show under some conditions\n1785399 - Under condition of heavy pod creation, creation fails with \u0027error reserving pod name ...: name is reserved\"\n1797766 - Resource Requirements\" specDescriptor fields - CPU and Memory injects empty string YAML editor\n1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
\n1805025 - [OSP] Machine status doesn\u0027t become \"Failed\" when creating a machine with invalid image\n1805639 - Machine status should be \"Failed\" when creating a machine with invalid machine configuration\n1806000 - CRI-O failing with: error reserving ctr name\n1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1810438 - Installation logs are not gathered from OCP nodes\n1812085 - kubernetes-networking-namespace-pods dashboard doesn\u0027t exist\n1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation\n1813012 - EtcdDiscoveryDomain no longer needed\n1813949 - openshift-install doesn\u0027t use env variables for OS_* for some of API endpoints\n1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use\n1819053 - loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: OpenAPI spec does not exist\n1819457 - Package Server is in \u0027Cannot update\u0027 status despite properly working\n1820141 - [RFE] deploy qemu-quest-agent on the nodes\n1822744 - OCS Installation CI test flaking\n1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario\n1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool\n1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file\n1829723 - User workload monitoring alerts fire out of the box\n1832968 - oc adm catalog mirror does not mirror the index image itself\n1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN\n1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters\n1834995 - olmFull suite always fails once th suite is run on the same cluster\n1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz\n1837953 - Replacing masters doesn\u0027t work for ovn-kubernetes 4.4\n1838352 - OperatorExited, Pending marketplace-operator-... 
pod for several weeks\n1838751 - [oVirt][Tracker] Re-enable skipped network tests\n1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups\n1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed\n1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP\n1841119 - Get rid of config patches and pass flags directly to kcm\n1841175 - When an Install Plan gets deleted, OLM does not create a new one\n1841381 - Issue with memoryMB validation\n1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option\n1844727 - Etcd container leaves grep and lsof zombie processes\n1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs\n1847074 - Filter bar layout issues at some screen widths on search page\n1848358 - CRDs with preserveUnknownFields:true don\u0027t reflect in status that they are non-structural\n1849543 - [4.5]kubeletconfig\u0027s description will show multiple lines for finalizers when upgrade from 4.4.8-\u003e4.5\n1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service\n1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard\n1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing\n1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD\n1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service\n1853115 - the restriction of --cloud option should be shown in help text. \n1853116 - `--to` option does not work with `--credentials-requests` flag. \n1853352 - [v2v][UI] Storage Class fields Should  Not be empty  in VM  disks view\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854567 - \"Installed Operators\" list showing \"duplicated\" entries during installation\n1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present\n1855351 - Inconsistent Installer reactions to Ctrl-C during user input process\n1855408 - OVN cluster unstable after running minimal scale test\n1856351 - Build page should show metrics for when the build ran, not the last 30 minutes\n1856354 - New APIServices missing from OpenAPI definitions\n1857446 - ARO/Azure: excessive pod memory allocation causes node lockup\n1857877 - Operator upgrades can delete existing CSV before completion\n1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed\n1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created\n1860136 - default ingress does not propagate annotations to route object on update\n1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as \"Failed\"\n1860518 - unable to stop a crio pod\n1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller\n1862430 - LSO: PV creation lock should not be acquired in a loop\n1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
\n1862608 - Virtual media does not work on hosts using BIOS, only UEFI\n1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network\n1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff\n1865839 - rpm-ostree fails with \"System transaction in progress\" when moving to kernel-rt\n1866043 - Configurable table column headers can be illegible\n1866087 - Examining agones helm chart resources results in \"Oh no!\"\n1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info\n1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement\n1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity\n1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there\u2019s no indication on which labels offer tooltip/help\n1866340 - [RHOCS Usability Study][Dashboard] It was not clear why \u201cNo persistent storage alerts\u201d was prominently displayed\n1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations\n1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le \u0026 s390x\n1866482 - Few errors are seen when oc adm must-gather is run\n1866605 - No metadata.generation set for build and buildconfig objects\n1866873 - MCDDrainError \"Drain failed on  , updates may be blocked\" missing rendered node name\n1866901 - Deployment strategy for BMO allows multiple pods to run at the same time\n1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. \n1867165 - Cannot assign static address to baremetal install bootstrap vm\n1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig\n1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS\n1867477 - HPA monitoring cpu utilization fails for deployments which have init containers\n1867518 - [oc] oc should not print so many goroutines when ANY command fails\n1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on  250 node cluster\n1867965 - OpenShift Console Deployment Edit overwrites deployment yaml\n1868004 - opm index add appears to produce image with wrong registry server binary\n1868065 - oc -o jsonpath prints possible warning / bug \"Unable to decode server response into a Table\"\n1868104 - Baremetal actuator should not delete Machine objects\n1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead\n1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters\n1868527 - OpenShift Storage using VMWare vSAN receives error \"Failed to add disk \u0027scsi0:2\u0027\" when mounted pod is created on separate node\n1868645 - After a disaster recovery pods a stuck in \"NodeAffinity\" state and not running\n1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation\n1868765 - [vsphere][ci] could not reserve an IP address: no available addresses\n1868770 - catalogSource named \"redhat-operators\" deleted in a disconnected cluster\n1868976 - Prometheus error opening query log file on EBS backed PVC\n1869293 - The configmap name looks confusing in aide-ds pod logs\n1869606 - crio\u0027s failing to delete a network namespace\n1870337 - [sig-storage] Managed cluster 
should have no crashlooping recycler pods over four minutes\n1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]\n1870373 - Ingress Operator reports available when DNS fails to provision\n1870467 - D/DC Part of Helm / Operator Backed should not have HPA\n1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json\n1870800 - [4.6] Managed Column not appearing on Pods Details page\n1871170 - e2e tests are needed to validate the functionality of the etcdctl container\n1872001 - EtcdDiscoveryDomain no longer needed\n1872095 - content are expanded to the whole line when only one column in table on Resource Details page\n1872124 - Could not choose device type as \"disk\" or \"part\" when create localvolumeset from web console\n1872128 - Can\u0027t run container with hostPort on ipv6 cluster\n1872166 - \u0027Silences\u0027 link redirects to unexpected \u0027Alerts\u0027 view after creating a silence in the Developer perspective\n1872251 - [aws-ebs-csi-driver] Verify job in CI doesn\u0027t check for vendor dir sanity\n1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them\n1872821 - [DOC] Typo in Ansible Operator Tutorial\n1872907 - Fail to create CR from generated Helm Base Operator\n1872923 - Click \"Cancel\" button on the \"initialization-resource\" creation form page should send users to the \"Operator details\" page instead of \"Install Operator\" page (previous page)\n1873007 - [downstream] failed to read config when running the operator-sdk in the home path\n1873030 - Subscriptions without any candidate operators should cause resolution to fail\n1873043 - Bump to latest available 1.19.x k8s\n1873114 - Nodes goes into NotReady state (VMware)\n1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem\n1873305 - Failed to power on /inspect node when using Redfish protocol\n1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information\n1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: \u201c?\u201d button/icon in Developer Console -\u003eNavigation\n1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working\n1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name \u003e 63 characters\n1874057 - Pod stuck in CreateContainerError - error msg=\"container_linux.go:348: starting container process caused \\\"chdir to cwd (\\\\\\\"/mount-point\\\\\\\") set in config.json failed: permission denied\\\"\"\n1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver\n1874192 - [RFE] \"Create Backing Store\" page doesn\u0027t allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider\n1874240 - [vsphere] unable to deprovision - Runtime error list attached objects\n1874248 - Include validation for vcenter host in the install-config\n1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6\n1874583 - apiserver tries and fails to log an event when shutting down\n1874584 - add retry for etcd errors in kube-apiserver\n1874638 - Missing logging for nbctl daemon\n1874736 - [downstream] no version info for the helm-operator\n1874901 - add utm_source 
parameter to Red Hat Marketplace URLs for attribution\n1874968 - Accessibility: The project selection drop down is a keyboard trap\n1875247 - Dependency resolution error \"found more than one head for channel\" is unhelpful for users\n1875516 - disabled scheduling is easy to miss in node page of OCP console\n1875598 - machine status is Running for a master node which has been terminated from the console\n1875806 - When creating a service of type \"LoadBalancer\" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes. \n1876166 - need to be able to disable kube-apiserver connectivity checks\n1876469 - Invalid doc link on yaml template schema description\n1876701 - podCount specDescriptor change doesn\u0027t take effect on operand details page\n1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt\n1876935 - AWS volume snapshot is not deleted after the cluster is destroyed\n1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted\n1877105 - add redfish to enabled_bios_interfaces\n1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`\n1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown\n1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only \u0027rootDevices\u0027\n1877681 - Manually created PV can not be used\n1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53\n1877740 - RHCOS unable to get ip address during first boot\n1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5\n1877919 - panic in multus-admission-controller\n1877924 - Cannot set BIOS config using Redfish with Dell iDracs\n1878022 - Met imagestreamimport error when import the whole image repository\n1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default \"Filesystem Name\" instead of providing a textbox, \u0026 the name should be validated\n1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status\n1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM\n1878766 - CPU consumption on nodes is higher than the CPU count of the node. \n1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus. 
\n1878823 - \"oc adm release mirror\" generating incomplete imageContentSources when using \"--to\" and \"--to-release-image\"\n1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode\n1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used\n1878953 - RBAC error shows when normal user access pvc upload page\n1878956 - `oc api-resources` does not include API version\n1878972 - oc adm release mirror removes the architecture information\n1879013 - [RFE]Improve CD-ROM interface selection\n1879056 - UI should allow to change or unset the evictionStrategy\n1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled\n1879094 - RHCOS dhcp kernel parameters not working as expected\n1879099 - Extra reboot during 4.5 -\u003e 4.6 upgrade\n1879244 - Error adding container to network \"ipvlan-host-local\": \"master\" field is required\n1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder\n1879282 - Update OLM references to point to the OLM\u0027s new doc site\n1879283 - panic after nil pointer dereference in pkg/daemon/update.go\n1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests\n1879419 - [RFE]Improve boot source description for \u0027Container\u0027 and \u2018URL\u2019\n1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted. \n1879565 - IPv6 installation fails on node-valid-hostname\n1879777 - Overlapping, divergent openshift-machine-api namespace manifests\n1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with \u0027Basic\u0027, skipping basic authentication in Log message in thanos-querier pod the oauth-proxy\n1879930 - Annotations shouldn\u0027t be removed during object reconciliation\n1879976 - No other channel visible from console\n1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc. \n1880148 - dns daemonset rolls out slowly in large clusters\n1880161 - Actuator Update calls should have fixed retry time\n1880259 - additional network + OVN network installation failed\n1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as \"Failed\"\n1880410 - Convert Pipeline Visualization node to SVG\n1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn\n1880443 - broken machine pool management on OpenStack\n1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s. 
\n1880473 - IBM Cloudpak operators installation stuck \"UpgradePending\" with InstallPlan status updates failing due to size limitation\n1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)\n1880785 - CredentialsRequest missing description in `oc explain`\n1880787 - No description for Provisioning CRD for `oc explain`\n1880902 - need dnsPlocy set in crd ingresscontrollers\n1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster\n1881027 - Cluster installation fails at with error :  the container name \\\"assisted-installer\\\" is already in use\n1881046 - [OSP] openstack-cinder-csi-driver-operator doesn\u0027t contain required manifests and assets\n1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node\n1881268 - Image uploading failed but wizard claim the source is available\n1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration\n1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup\n1881881 - unable to specify target port manually resulting in application not reachable\n1881898 - misalignment of sub-title in quick start headers\n1882022 - [vsphere][ipi] directory path is incomplete, terraform can\u0027t find the cluster\n1882057 - Not able to select access modes for snapshot and clone\n1882140 - No description for spec.kubeletConfig\n1882176 - Master recovery instructions don\u0027t handle IP change well\n1882191 - Installation fails against external resources which lack DNS Subject Alternative Name\n1882209 - [ BateMetal IPI ] local coredns resolution not working\n1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from \"Too large resource version\"\n1882268 - [e2e][automation]Add Integration Test for Snapshots\n1882361 - Retrieve and expose the latest report for the cluster\n1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use\n1882556 - git:// protocol in origin tests is not currently proxied\n1882569 - CNO: Replacing masters doesn\u0027t work for ovn-kubernetes 4.4\n1882608 - Spot instance not getting created on AzureGovCloud\n1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance\n1882649 - IPI installer labels all images it uploads into glance as qcow2\n1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic\n1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page\n1882660 - Operators in a namespace should be installed together when approve one\n1882667 - [ovn] br-ex Link not found when scale up RHEL worker\n1882723 - [vsphere]Suggested mimimum value for providerspec not working\n1882730 - z systems not reporting correct core count in recording rule\n1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully\n1882781 - nameserver= option to dracut creates extra NM connection profile\n1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined\n1882844 - [IPI on vsphere] Executing \u0027openshift-installer destroy cluster\u0027 leaves installer tag categories in vsphere\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1883388 - Bare Metal Hosts Details page doesn\u0027t show 
Mainitenance and Power On/Off status\n1883422 - operator-sdk cleanup fail after installing operator with \"run bundle\" without installmode and og with ownnamespace\n1883425 - Gather top installplans and their count\n1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2\n1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]\n1883538 - must gather report \"cannot file manila/aws ebs/ovirt csi related namespaces and objects\" error\n1883560 - operator-registry image needs clean up in /tmp\n1883563 - Creating duplicate namespace from create namespace modal breaks the UI\n1883614 - [OCP 4.6] [UI] UI should not describe power cycle as \"graceful\"\n1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate\n1883660 - e2e-metal-ipi CI job consistently failing on 4.4\n1883765 - [user workload monitoring] improve latency of Thanos sidecar  when streaming read requests\n1883766 - [e2e][automation] Adjust tests for UI changes\n1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations\n1883773 - opm alpha bundle build fails on win10 home\n1883790 - revert \"force cert rotation every couple days for development\" in 4.7\n1883803 - node pull secret feature is not working as expected\n1883836 - Jenkins imagestream ubi8 and nodejs12 update\n1883847 - The UI does not show checkbox for enable encryption at rest for OCS\n1883853 - go list -m all does not work\n1883905 - race condition in opm index add --overwrite-latest\n1883946 - Understand why trident CSI pods are getting deleted by OCP\n1884035 - Pods are illegally transitioning back to pending\n1884041 - e2e should provide error info when minimum number of pods aren\u0027t ready in kube-system namespace\n1884131 - oauth-proxy repository should run tests\n1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied\n1884221 - IO becomes unhealthy due to a file change\n1884258 - Node network alerts should work on ratio rather than absolute values\n1884270 - Git clone does not support SCP-style ssh locations\n1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout\n1884435 - vsphere - loopback is randomly not being added to resolver\n1884565 - oauth-proxy crashes on invalid usage\n1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy\n1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users\n1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment\n1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu. \n1884632 - Adding BYOK disk encryption through DES\n1884654 - Utilization of a VMI is not populated\n1884655 - KeyError on self._existing_vifs[port_id]\n1884664 - Operator install page shows \"installing...\" instead of going to install status page\n1884672 - Failed to inspect hardware. 
Reason: unable to start inspection: \u0027idrac\u0027\n1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure\n1884724 - Quick Start: Serverless quickstart doesn\u0027t match Operator install steps\n1884739 - Node process segfaulted\n1884824 - Update baremetal-operator libraries to k8s 1.19\n1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping\n1885138 - Wrong detection of pending state in VM details\n1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2\n1885165 - NoRunningOvnMaster alert falsely triggered\n1885170 - Nil pointer when verifying images\n1885173 - [e2e][automation] Add test for next run configuration feature\n1885179 - oc image append fails on push (uploading a new layer)\n1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig\n1885218 - [e2e][automation] Add virtctl to gating script\n1885223 - Sync with upstream (fix panicking cluster-capacity binary)\n1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI\n1885315 - unit tests fail on slow disks\n1885319 - Remove redundant use of group and kind of DataVolumeTemplate\n1885343 - Console doesn\u0027t load in iOS Safari when using self-signed certificates\n1885344 - 4.7 upgrade - dummy bug for 1880591\n1885358 - add p\u0026f configuration to protect openshift traffic\n1885365 - MCO does not respect the install section of systemd files when enabling\n1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating\n1885398 - CSV with only Webhook conversion can\u0027t be installed\n1885403 - Some OLM events hide the underlying errors\n1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case\n1885425 - opm index add cannot batch add multiple bundles that use skips\n1885543 - node tuning operator builds and installs an unsigned RPM\n1885644 - Panic output due to timeouts in openshift-apiserver\n1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU \u003c 30 || totalMemory \u003c 72 GiB for initial deployment\n1885702 - Cypress:  Fix \u0027aria-hidden-focus\u0027 accesibility violations\n1885706 - Cypress:  Fix \u0027link-name\u0027 accesibility violation\n1885761 - DNS fails to resolve in some pods\n1885856 - Missing registry v1 protocol usage metric on telemetry\n1885864 - Stalld service crashed under the worker node\n1885930 - [release 4.7] Collect ServiceAccount statistics\n1885940 - kuryr/demo image ping not working\n1886007 - upgrade test with service type load balancer will never work\n1886022 - Move range allocations to CRD\u0027s\n1886028 - [BM][IPI] Failed to delete node after scale down\n1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas\n1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd\n1886154 - System roles are not present while trying to create new role binding 
through web console\n1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5-\u003e4.6 causes broadcast storm\n1886168 - Remove Terminal Option for Windows Nodes\n1886200 - greenwave / CVP is failing on bundle validations, cannot stage push\n1886229 - Multipath support for RHCOS sysroot\n1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage\n1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status\n1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL\n1886397 - Move object-enum to console-shared\n1886423 - New Affinities don\u0027t contain ID until saving\n1886435 - Azure UPI uses deprecated command \u0027group deployment\u0027\n1886449 - p\u0026f: add configuration to protect oauth server traffic\n1886452 - layout options doesn\u0027t gets selected style on click i.e grey background\n1886462 - IO doesn\u0027t recognize namespaces - 2 resources with the same name in 2 namespaces -\u003e only 1 gets collected\n1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest\n1886524 - Change default terminal command for Windows Pods\n1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution\n1886600 - panic: assignment to entry in nil map\n1886620 - Application behind service load balancer with PDB is not disrupted\n1886627 - Kube-apiserver pods restarting/reinitializing periodically\n1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider\n1886636 - Panic in machine-config-operator\n1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer. \n1886751 - Gather MachineConfigPools\n1886766 - PVC dropdown has \u0027Persistent Volume\u0027 Label\n1886834 - ovn-cert is mandatory in both master and node daemonsets\n1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState\n1886861 - ordered-values.yaml not honored if values.schema.json provided\n1886871 - Neutron ports created for hostNetworking pods\n1886890 - Overwrite jenkins-agent-base imagestream\n1886900 - Cluster-version operator fills logs with \"Manifest: ...\" spew\n1886922 - [sig-network] pods should successfully create sandboxes by getting pod\n1886973 - Local storage operator doesn\u0027t include correctly populate LocalVolumeDiscoveryResult in console\n1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO\n1887010 - Imagepruner met error \"Job has reached the specified backoff limit\" which causes image registry degraded\n1887026 - FC volume attach fails with \u201cno fc disk found\u201d error on OCP 4.6 PowerVM cluster\n1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6\n1887046 - Event for LSO need update to avoid confusion\n1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image\n1887375 - User should be able to specify volumeMode when creating pvc from web-console\n1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console\n1887392 - openshift-apiserver: delegated authn/z should have ttl \u003e metrics/healthz/readyz/openapi interval\n1887428 - oauth-apiserver service should be monitored by prometheus\n1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting \"degraded: False\"\n1887454 - [sig-storage] In-tree Volumes [Driver: 
azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data\n1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes\n1887465 - Deleted project is still referenced\n1887472 - unable to edit application group for KSVC via gestures (shift+Drag)\n1887488 - OCP 4.6:  Topology Manager OpenShift E2E test fails:  gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface\n1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster\n1887525 - Failures to set master HardwareDetails cannot easily be debugged\n1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable\n1887585 - ovn-masters stuck in crashloop after scale test\n1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade. \n1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator\n1887740 - cannot install descheduler operator after uninstalling it\n1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events\n1887750 - `oc explain localvolumediscovery` returns empty description\n1887751 - `oc explain localvolumediscoveryresult` returns empty description\n1887778 - Add ContainerRuntimeConfig gatherer\n1887783 - PVC upload cannot continue after approve the certificate\n1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard\n1887799 - User workload monitoring prometheus-config-reloader OOM\n1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky\n1887863 - Installer panics on invalid flavor\n1887864 - Clean up dependencies to avoid invalid scan flagging\n1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison\n1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig\n1888015 - workaround kubelet graceful termination of static pods bug\n1888028 - prevent extra cycle in aggregated apiservers\n1888036 - Operator details shows old CRD versions\n1888041 - non-terminating pods are going from running to pending\n1888072 - Setting Supermicro node to PXE boot via Redfish doesn\u0027t take affect\n1888073 - Operator controller continuously busy looping\n1888118 - Memory requests not specified for image registry operator\n1888150 - Install Operand Form on OperatorHub is displaying unformatted text\n1888172 - PR 209 didn\u0027t update the sample archive, but machineset and pdbs are now namespaced\n1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build\n1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5\n1888311 - p\u0026f: make SAR traffic from oauth and openshift apiserver exempt\n1888363 - namespaces crash in dev\n1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created\n1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected\n1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC\n1888494 - imagepruner pod is error when image registry storage is not configured\n1888565 - [OSP] machine-config-daemon-firstboot.service failed with \"error reading osImageURL from rpm-ostree\"\n1888595 - cluster-policy-controller 
logs shows error which reads initial monitor sync has error\n1888601 - The poddisruptionbudgets is using the operator service account, instead of gather\n1888657 - oc doesn\u0027t know its name\n1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable\n1888671 - Document the Cloud Provider\u0027s ignore-volume-az setting\n1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image\n1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s\", cr.GetName()\n1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set\n1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster\n1888866 - AggregatedAPIDown permanently firing after removing APIService\n1888870 - JS error when using autocomplete in YAML editor\n1888874 - hover message are not shown for some properties\n1888900 - align plugins versions\n1888985 - Cypress:  Fix \u0027Ensures buttons have discernible text\u0027 accesibility violation\n1889213 - The error message of uploading failure is not clear enough\n1889267 - Increase the time out for creating template and upload image in the terraform\n1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)\n1889374 - Kiali feature won\u0027t work on fresh 4.6 cluster\n1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode\n1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade\n1889515 - Accessibility - The symbols e.g checkmark in the Node \u003e overview page has no text description, label, or other accessible information\n1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance\n1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown\n1889577 - Resources are not shown on project workloads page\n1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment\n1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages\n1889692 - Selected Capacity is showing wrong size\n1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15\n1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off\n1889710 - Prometheus metrics on disk take more space compared to OCP 4.5\n1889721 - opm index add semver-skippatch mode does not respect prerelease versions\n1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn\u0027t see the Disk tab\n1889767 - [vsphere] Remove certificate from upi-installer image\n1889779 - error when destroying a vSphere installation that failed early\n1889787 - OCP is flooding the oVirt engine with auth errors\n1889838 - race in Operator update after fix from bz1888073\n1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1\n1889863 - Router prints incorrect log message for namespace label selector\n1889891 - Backport timecache LRU fix\n1889912 - Drains can cause high CPU usage\n1889921 - Reported Degraded=False Available=False pair does not make sense\n1889928 - [e2e][automation] Add more tests for golden os\n1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName\n1890038 - Infrastructure status.platform not migrated to status.platformStatus causes 
warnings\n1890074 - MCO extension kernel-headers is invalid\n1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest\n1890130 - multitenant mode consistently fails CI\n1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e\n1890145 - The mismatched of font size for Status Ready and Health Check secondary text\n1890180 - FieldDependency x-descriptor doesn\u0027t support non-sibling fields\n1890182 - DaemonSet with existing owner garbage collected\n1890228 - AWS: destroy stuck on route53 hosted zone not found\n1890235 - e2e: update Protractor\u0027s checkErrors logging\n1890250 - workers may fail to join the cluster during an update from 4.5\n1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member\n1890270 - External IP doesn\u0027t work if the IP address is not assigned to a node\n1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability\n1890456 - [vsphere] mapi_instance_create_failed doesn\u0027t work on vsphere\n1890467 - unable to edit an application without a service\n1890472 - [Kuryr] Bulk port creation exception not completely formatted\n1890494 - Error assigning Egress IP on GCP\n1890530 - cluster-policy-controller doesn\u0027t gracefully terminate\n1890630 - [Kuryr] Available port count not correctly calculated for alerts\n1890671 - [SA] verify-image-signature using service account does not work\n1890677 - \u0027oc image info\u0027 claims \u0027does not exist\u0027 for application/vnd.oci.image.manifest.v1+json manifest\n1890808 - New etcd alerts need to be added to the monitoring stack\n1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn\u0027t sync the \"overall\" sha it syncs only the sub arch sha. \n1890984 - Rename operator-webhook-config to sriov-operator-webhook-config\n1890995 - wew-app should provide more insight into why image deployment failed\n1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call\n1891047 - Helm chart fails to install using developer console because of TLS certificate error\n1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler\n1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI\n1891108 - p\u0026f: Increase the concurrency share of workload-low priority level\n1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)\n1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown\n1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn\u0027t meet requirements of chart)\n1891362 - Wrong metrics count for openshift_build_result_total\n1891368 - fync should be fsync for etcdHighFsyncDurations alert\u0027s annotations.message\n1891374 - fync should be fsync for etcdHighFsyncDurations critical alert\u0027s annotations.message\n1891376 - Extra text in Cluster Utilization charts\n1891419 - Wrong detail head on network policy detail page. 
\n1891459 - Snapshot tests should report stderr of failed commands\n1891498 - Other machine config pools do not show during update\n1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage\n1891551 - Clusterautoscaler doesn\u0027t scale up as expected\n1891552 - Handle missing labels as empty. \n1891555 - The windows oc.exe binary does not have version metadata\n1891559 - kuryr-cni cannot start new thread\n1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11\n1891625 - [Release 4.7] Mutable LoadBalancer Scope\n1891702 - installer get pending when additionalTrustBundle is added into  install-config.yaml\n1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails\n1891740 - OperatorStatusChanged is noisy\n1891758 - the authentication operator may spam DeploymentUpdated event endlessly\n1891759 - Dockerfile builds cannot change /etc/pki/ca-trust\n1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1\n1891825 - Error message not very informative in case of mode mismatch\n1891898 - The ClusterServiceVersion can define Webhooks that cannot be created. \n1891951 - UI should show warning while creating pools with compression on\n1891952 - [Release 4.7] Apps Domain Enhancement\n1891993 - 4.5 to 4.6 upgrade doesn\u0027t remove deployments created by marketplace\n1891995 - OperatorHub displaying old content\n1891999 - Storage efficiency card showing wrong compression ratio\n1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28\u0027 not found (required by ./opm)\n1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. \n1892198 - TypeError in \u0027Performance Profile\u0027 tab displayed for \u0027Performance Addon Operator\u0027\n1892288 - assisted install workflow creates excessive control-plane disruption\n1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config\n1892358 - [e2e][automation] update feature gate for kubevirt-gating job\n1892376 - Deleted netnamespace could not be re-created\n1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky\n1892393 - TestListPackages is flaky\n1892448 - MCDPivotError alert/metric missing\n1892457 - NTO-shipped stalld needs to use FIFO for boosting. \n1892467 - linuxptp-daemon crash\n1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env\n1892653 - User is unable to create KafkaSource with v1beta\n1892724 - VFS added to the list of devices of the nodeptpdevice CRD\n1892799 - Mounting additionalTrustBundle in the operator\n1893117 - Maintenance mode on vSphere blocks installation. \n1893351 - TLS secrets are not able to edit on console. 
\n1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots\n1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky \"worker\" assumption when guessing about ingress availability\n1893546 - Deploy using virtual media fails on node cleaning step\n1893601 - overview filesystem utilization of OCP is showing the wrong values\n1893645 - oc describe route SIGSEGV\n1893648 - Ironic image building process is not compatible with UEFI secure boot\n1893724 - OperatorHub generates incorrect RBAC\n1893739 - Force deletion doesn\u0027t work for snapshots if snapshotclass is already deleted\n1893776 - No useful metrics for image pull time available, making debugging issues there impossible\n1893798 - Lots of error messages starting with \"get namespace to enqueue Alertmanager instances failed\" in the logs of prometheus-operator\n1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD\n1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS\n1893926 - Some \"Dynamic PV (block volmode)\" pattern storage e2e tests are wrongly skipped\n1893944 - Wrong product name for Multicloud Object Gateway\n1893953 - (release-4.7) Gather default StatefulSet configs\n1893956 - Installation always fails at \"failed to initialize the cluster: Cluster operator image-registry is still updating\"\n1893963 - [Testday] Workloads-\u003e Virtualization is not loading for Firefox browser\n1893972 - Should skip e2e test cases as early as possible\n1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without \u0027https://\u0027\n1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective\n1894025 - OCP 4.5 to 4.6 upgrade for \"aws-ebs-csi-driver-operator\" fails when \"defaultNodeSelector\" is set\n1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used. \n1894065 - tag new packages to enable TLS support\n1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0\n1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries\n1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM\n1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted\n1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)\n1894216 - Improve OpenShift Web Console availability\n1894275 - Fix CRO owners file to reflect node owner\n1894278 - \"database is locked\" error when adding bundle to index image\n1894330 - upgrade channels needs to be updated for 4.7\n1894342 - oauth-apiserver logs many \"[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... 
no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient\"\n1894374 - Dont prevent the user from uploading a file with incorrect extension\n1894432 - [oVirt] sometimes installer timeout on tmp_import_vm\n1894477 - bash syntax error in nodeip-configuration.service\n1894503 - add automated test for Polarion CNV-5045\n1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform\n1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets\n1894645 - Cinder volume provisioning crashes on nil cloud provider\n1894677 - image-pruner job is panicking: klog stack\n1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0\n1894860 - \u0027backend\u0027 CI job passing despite failing tests\n1894910 - Update the node to use the real-time kernel fails\n1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package\n1895065 - Schema / Samples / Snippets Tabs are all selected at the same time\n1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI\n1895141 - panic in service-ca injector\n1895147 - Remove memory limits on openshift-dns\n1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation\n1895268 - The bundleAPIs should NOT be empty\n1895309 - [OCP v47] The RHEL node scaleup fails due to \"No package matching \u0027cri-o-1.19.*\u0027 found available\" on OCP 4.7 cluster\n1895329 - The infra index filled with warnings \"WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release\"\n1895360 - Machine Config Daemon removes a file although its defined in the dropin\n1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1\n1895372 - Web console going blank after selecting any operator to install from OperatorHub\n1895385 - Revert KUBELET_LOG_LEVEL back to level 3\n1895423 - unable to edit an application with a custom builder image\n1895430 - unable to edit custom template application\n1895509 - Backup taken on one master cannot be restored on other masters\n1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image\n1895838 - oc explain description contains \u0027/\u0027\n1895908 - \"virtio\" option is not available when modifying a CD-ROM to disk type\n1895909 - e2e-metal-ipi-ovn-dualstack is failing\n1895919 - NTO fails to load kernel modules\n1895959 - configuring webhook token authentication should prevent cluster upgrades\n1895979 - Unable to get coreos-installer with --copy-network to work\n1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV\n1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)\n1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed\n1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest\n1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded\n1896244 - Found a panic in storage e2e test\n1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general\n1896302 - [e2e][automation] Fix 4.6 test failures\n1896365 - [Migration]The SDN migration cannot revert under some conditions\n1896384 - [ovirt IPI]: local coredns resolution not working\n1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6\n1896529 - Incorrect instructions in the Serverless operator and application quick starts\n1896645 - documentationBaseURL needs to be updated for 4.7\n1896697 - [Descheduler] policy.yaml param in cluster configmap is empty\n1896704 - Machine API components should honour cluster wide proxy settings\n1896732 - \"Attach to Virtual Machine OS\" button should not be visible on old clusters\n1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection  is incompatible with SR-IOV operator\n1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails\n1896918 - start creating new-style Secrets for AWS\n1896923 - DNS pod /metrics exposed on anonymous http port\n1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1897003 - VNC console cannot be connected after visit it in new window\n1897008 - Cypress: reenable check for \u0027aria-hidden-focus\u0027 rule \u0026 checkA11y test for modals\n1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO\n1897039 - router pod keeps printing log: template \"msg\"=\"router reloaded\"  \"output\"=\"[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option \u0027http-use-htx\u0027 is deprecated and ignored\n1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. \n1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces\n1897138 - oVirt provider uses depricated cluster-api project\n1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly\n1897252 - Firing alerts are not showing up in console UI after cluster is up for some time\n1897354 - Operator installation showing success, but Provided APIs are missing\n1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with \"connection refused\"\n1897412 - [sriov]disableDrain did not be updated in CRD of manifest\n1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page\n1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to \u0027localhost\u0027\n1897520 - After restarting nodes the image-registry co is in degraded true state. 
\n1897584 - Add casc plugins\n1897603 - Cinder volume attachment detection failure in Kubelet\n1897604 - Machine API deployment fails: Kube-Controller-Manager can\u0027t reach API: \"Unauthorized\"\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests\n1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition\n1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`\n1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing\n1897897 - ptp lose sync openshift 4.6\n1898036 - no network after reboot (IPI)\n1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically\n1898097 - mDNS floods the baremetal network\n1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem\n1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied\n1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster\n1898174 - [OVN] EgressIP does not guard against node IP assignment\n1898194 - GCP: can\u0027t install on custom machine types\n1898238 - Installer validations allow same floating IP for API and Ingress\n1898268 - [OVN]: `make check` broken on 4.6\n1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default\n1898320 - Incorrect Apostrophe  Translation of  \"it\u0027s\" in Scheduling Disabled Popover\n1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. \n1898407 - [Deployment timing regression] Deployment takes longer with 4.7\n1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service\n1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine\n1898500 - Failure to upgrade operator when a Service is included in a Bundle\n1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic\n1898532 - Display names defined in specDescriptors not respected\n1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted\n1898613 - Whereabouts should exclude IPv6 ranges\n1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase\n1898679 - Operand creation form - Required \"type: object\" properties (Accordion component) are missing red asterisk\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator\n1898839 - Wrong YAML in operator metadata\n1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job\n1898873 - Remove TechPreview Badge from Monitoring\n1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way\n1899111 - [RFE] Update jenkins-maven-agen to maven36\n1899128 - VMI details screen -\u003e show the warning that it is preferable to have a VM only if the VM actually does not exist\n1899175 - bump the RHCOS boot images for 4.7\n1899198 - Use new packages for ipa ramdisks\n1899200 - In Installed Operators page I cannot search for an Operator by it\u0027s name\n1899220 - Support AWS IMDSv2\n1899350 - 
configure-ovs.sh doesn\u0027t configure bonding options\n1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error \"An error occurred Not Found\"\n1899459 - Failed to start monitoring pods once the operator removed from override list of CVO\n1899515 - Passthrough credentials are not immediately re-distributed on update\n1899575 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899582 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899588 - Operator objects are re-created after all other associated resources have been deleted\n1899600 - Increased etcd fsync latency as of OCP 4.6\n1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup\n1899627 - Project dashboard Active status using small icon\n1899725 - Pods table does not wrap well with quick start sidebar open\n1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)\n1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality\n1899835 - catalog-operator repeatedly crashes with \"runtime error: index out of range [0] with length 0\"\n1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap\n1899853 - additionalSecurityGroupIDs not working for master nodes\n1899922 - NP changes sometimes influence new pods. \n1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet\n1900008 - Fix internationalized sentence fragments in ImageSearch.tsx\n1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx\n1900020 - Remove \u0026apos; from internationalized keys\n1900022 - Search Page - Top labels field is not applied to selected Pipeline resources\n1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently\n1900126 - Creating a VM results in suggestion to create a default storage class when one already exists\n1900138 - [OCP on RHV] Remove insecure mode from the installer\n1900196 - stalld is not restarted after crash\n1900239 - Skip \"subPath should be able to unmount\" NFS test\n1900322 - metal3 pod\u0027s toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists\n1900377 - [e2e][automation] create new css selector for active users\n1900496 - (release-4.7) Collect spec config for clusteroperator resources\n1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks\n1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue\n1900759 - include qemu-guest-agent by default\n1900790 - Track all resource counts via telemetry\n1900835 - Multus errors when cachefile is not found\n1900935 - `oc adm release mirror` panic panic: runtime error\n1900989 - accessing the route cannot wake up the idled resources\n1901040 - When scaling down the status of the node is stuck on deleting\n1901057 - authentication operator health check failed when installing a cluster behind proxy\n1901107 - pod donut shows incorrect information\n1901111 - Installer dependencies are broken\n1901200 - linuxptp-daemon crash when enable debug log level\n1901301 - CBO should handle platform=BM without provisioning CR\n1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly\n1901363 - High Podready Latency due to timed out waiting for annotations\n1901373 - redundant bracket on snapshot restore button\n1901376 - 
[on-prem] Upgrade from 4.6 to 4.7 failed with \"timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true\"\n1901395 - \"Edit virtual machine template\" action link should be removed\n1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting\n1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP\n1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema\n1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod \"before all\" hook for \"creates the resource instance\"\n1901604 - CNO blocks editing Kuryr options\n1901675 - [sig-network] multicast when using one of the plugins \u0027redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy\u0027 should allow multicast traffic in namespaces where it is enabled\n1901909 - The device plugin pods / cni pod are restarted every 5 minutes\n1901982 - [sig-builds][Feature:Builds] build can reference a cluster service  with a build being created from new-build should be able to run a build that references a cluster service\n1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error\n1902059 - Wire a real signer for service accout issuer\n1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902157 - The DaemonSet machine-api-termination-handler couldn\u0027t allocate Pod\n1902253 - MHC status doesnt set RemediationsAllowed = 0\n1902299 - Failed to mirror operator catalog - error: destination registry required\n1902545 - Cinder csi driver node pod should add nodeSelector for Linux\n1902546 - Cinder csi driver node pod doesn\u0027t run on master node\n1902547 - Cinder csi driver controller pod doesn\u0027t run on master node\n1902552 - Cinder csi driver does not use the downstream images\n1902595 - Project workloads list view doesn\u0027t show alert icon and hover message\n1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent\n1902601 - Cinder csi driver pods run as BestEffort qosClass\n1902653 - [BM][IPI] Master deployment failed: No valid host was found. 
Reason: No conductor service registered which supports driver redfish for conductor group\n1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails\n1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked\n1902824 - failed to generate semver informed package manifest: unable to determine default channel\n1902894 - hybrid-overlay-node crashing trying to get node object during initialization\n1902969 - Cannot load vmi detail page\n1902981 - It should default to current namespace when create vm from template\n1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file  via s3:// URI\n1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry\n1903034 - OLM continuously printing debug logs\n1903062 - [Cinder csi driver] Deployment mounted volume have no write access\n1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready\n1903107 - Enable vsphere-problem-detector e2e tests\n1903164 - OpenShift YAML editor jumps to top every few seconds\n1903165 - Improve Canary Status Condition handling for e2e tests\n1903172 - Column Management: Fix sticky footer on scroll\n1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled\n1903188 - [Descheduler] cluster log reports failed to validate server configuration\" err=\"unsupported log format:\n1903192 - Role name missing on create role binding form\n1903196 - Popover positioning is misaligned for Overview Dashboard status items\n1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. \n1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components\n1903248 - Backport Upstream Static Pod UID patch\n1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]\n1903290 - Kubelet repeatedly log the same log line from exited containers\n1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. 
\n1903382 - Panic when task-graph is canceled with a TaskNode with no tasks\n1903400 - Migrate a VM which is not running goes to pending state\n1903402 - Nic/Disk on VMI overview should link to VMI\u0027s nic/disk page\n1903414 - NodePort is not working when configuring an egress IP address\n1903424 - mapi_machine_phase_transition_seconds_sum doesn\u0027t work\n1903464 - \"Evaluating rule failed\" for \"record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\" and \"record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"\n1903639 - Hostsubnet gatherer produces wrong output\n1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service\n1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started\n1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image\n1903717 - Handle different Pod selectors for metal3 Deployment\n1903733 - Scale up followed by scale down can delete all running workers\n1903917 - Failed to load \"Developer Catalog\" page\n1903999 - Httplog response code is always zero\n1904026 - The quota controllers should resync on new resources and make progress\n1904064 - Automated cleaning is disabled by default\n1904124 - DHCP to static lease script doesn\u0027t work correctly if starting with infinite leases\n1904125 - Boostrap VM .ign image gets added into \u0027default\u0027 pool instead of \u003ccluster-name\u003e-\u003cid\u003e-bootstrap\n1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails\n1904133 - KubeletConfig flooded with failure conditions\n1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart\n1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !\n1904244 - MissingKey errors for two plugins using i18next.t\n1904262 - clusterresourceoverride-operator has version: 1.0.0 every build\n1904296 - VPA-operator has version: 1.0.0 every build\n1904297 - The index image generated by \"opm index prune\" leaves unrelated images\n1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards\n1904385 - [oVirt] registry cannot mount volume on 4.6.4 -\u003e 4.6.6 upgrade\n1904497 - vsphere-problem-detector: Run on vSphere cloud only\n1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set\n1904502 - vsphere-problem-detector: allow longer timeouts for some operations\n1904503 - vsphere-problem-detector: emit alerts\n1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)\n1904578 - metric scraping for vsphere problem detector is not configured\n1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -\u003e 4.6.6 upgrade\n1904663 - IPI pointer customization MachineConfig always generated\n1904679 - [Feature:ImageInfo] Image info should display information about images\n1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image\n1904684 - [sig-cli] oc debug ensure it works with image streams\n1904713 - Helm charts with kubeVersion restriction are filtered incorrectly\n1904776 - Snapshot modal alert is not pluralized\n1904824 - Set vSphere hostname from guestinfo before NM starts\n1904941 - Insights status is always showing a loading 
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE - Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalids previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for `oc image append` does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate `this` without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-confing.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accesible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing `ignore-volume-az` is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show `kubectl diff ` command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory Path to Devfile
1910805 - Missing translation for Pipeline status and breadcrumb text
1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer
1910840 - Show Nonexistent command info in the `oc rollback -h` help page
1910859 - breadcrumbs doesn't use last namespace
1910866 - Unify templates string
1910870 - Unify template dropdown action
1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6
1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads"
1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard
1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration"
1911213 - Wrong and misleading warning for VMs that were created manually (not from template)
1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created
1911269 - waiting for the build message present when build exists
1911280 - Builder images are not detected for Dotnet, Httpd, NGINX
1911307 - Pod Scale-up requires extra privileges in OpenShift web-console
1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template
1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error
1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template
1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation
1911418 - [v2v] The target storage class name is not displayed if default storage class is used
1911434 - git ops empty state page displays icon with watermark
1911443 - SSH Cretifiaction field should be validated
1911465 - IOPS display wrong unit
1911474 - Devfile Application Group Does Not Delete Cleanly (errors)
1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController
1911574 - Expose volume mode on Upload Data form
1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined
1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel
1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle''
1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state
1911782 - Descheduler should not evict pod used local storage by the PVC
1911796 - uploading flow being displayed before submitting the form
1912066 - The ansible type operator's manager container is not stable when managing the CR
1912077 - helm operator's default rbac forbidden
1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory'
1912237 - Rebase CSI sidecars for 4.7
1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page
1912409 - Fix flow schema deployment
1912434 - Update guided tour modal title
1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken
1912523 - Standalone pod status not updating in topology graph
1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion
1912558 - TaskRun list and detail screen doesn't show Pending status
1912563 - p&f: carry 97206: clean up executing request on panic
1912565 - OLM macOS local build broken by moby/term dependency
1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion
1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff
1912590 - publicImageRepository not being populated
1912640 - Go operator's controller pods is forbidden
1912701 - Handle dual-stack configuration for NIC IP
1912703 - multiple queries can't be plotted in the same graph under some conditons
1912730 - Operator backed: In-context should support visual connector if SBO is not installed
1912828 - Align High Performance VMs with High Performance in RHV-UI
1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates
1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URL's
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early before it's succeeded eventually
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - update info alert this template is not aditable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in `from dockerfile` flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk souce
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - Its not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling
1914793 - device names should not be translated
1914894 - Warn about using non-groupified api version
1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug
1914932 - Put correct resource name in relatedObjects
1914938 - PVC disk is not shown on customization wizard general tab
1914941 - VM Template rootdisk is not deleted after fetching default disk bus
1914975 - Collect logs from openshift-sdn namespace
1915003 - No estimate of average node readiness during lifetime of a cluster
1915027 - fix MCS blocking iptables rules
1915041 - s3:ListMultipartUploadParts is relied on implicitly
1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons
1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours
1915085 - Pods created and rapidly terminated get stuck
1915114 - [aws-c2s] worker machines are not create during install
1915133 - Missing default pinned nav items in dev perspective
1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource
1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot
1915188 - Remove HostSubnet anonymization
1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment
1915217 - OKD payloads expect to be signed with production keys
1915220 - Remove dropdown workaround for user settings
1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure
1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod
1915277 - [e2e][automation]fix cdi upload form test
1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout
1915304 - Updating scheduling component builder & base images to be consistent with ART
1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node
1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection
1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod
1915357 - Dev Catalog doesn't load anything if virtualization operator is installed
1915379 - New template wizard should require provider and make support input a dropdown type
1915408 - Failure in operator-registry kind e2e test
1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation
1915460 - Cluster name size might affect installations
1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance
1915540 - Silent 4.7 RHCOS install failure on ppc64le
1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
1915582 - p&f: carry upstream pr 97860
1915594 - [e2e][automation] Improve test for disk validation
1915617 - Bump bootimage for various fixes
1915624 - "Please fill in the following field: Template provider" blocks customize wizard
1915627 - Translate Guided Tour text.
1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error
1915647 - Intermittent White screen when the connector dragged to revision
1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased
1915654 - [e2e][automation] Add a verification for Afinity modal should hint "Matching node found"
1915661 - Can't run the 'oc adm prune' command in a pod
1915672 - Kuryr doesn't work with selfLink disabled.
1915674 - Golden image PVC creation - storage size should be taken from the template
1915685 - Message for not supported template is not clear enough
1915760 - Need to increase timeout to wait rhel worker get ready
1915793 - quick starts panel syncs incorrectly across browser windows
1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster
1915818 - vsphere-problem-detector: use "_totals" in metrics
1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol
1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version
1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0
1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics
1915885 - Kuryr doesn't support workers running on multiple subnets
1915898 - TaskRun log output shows "undefined" in streaming
1915907 - test/cmd/builds.sh uses docker.io
1915912 - sig-storage-csi-snapshotter image not available
1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard
1915939 - Resizing the browser window removes Web Terminal Icon
1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7
1915962 - ROKS: manifest with machine health check fails to apply in 4.7
1915972 - Global configuration breadcrumbs do not work as expected
1915981 - Install ethtool and conntrack in container for debugging
1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception
1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups
1916021 - OLM enters infinite loop if Pending CSV replaces itself
1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry
1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations
1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk
1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration
1916145 - Explicitly set minimum versions of python libraries
1916164 - Update csi-driver-nfs builder & base images to be consistent with ART
1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7
1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third
1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2
1916379 - error metrics from vsphere-problem-detector should be gauge
1916382 - Can't create ext4 filesystems with Ignition
1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates
1916401 - Deleting an ingress controller with a bad DNS Record hangs
1916417 - [Kuryr] Must-gather does not have all Custom Resources information
1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image
1916454 - teach CCO about upgradeability from 4.6 to 4.7
1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation
1916502 - Boot disk mirroring fails with mdadm error
1916524 - Two rootdisk shows on storage step
1916580 - Default yaml is broken for VM and VM template
1916621 - oc adm node-logs examples are wrong
1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret.
1916692 - Possibly fails to destroy LB and thus cluster
1916711 - Update Kube dependencies in MCO to 1.20.0
1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6
1916764 - editing a workload with no application applied, will auto fill the app
1916834 - Pipeline Metrics - Text Updates
1916843 - collect logs from openshift-sdn-controller pod
1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed
1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually
1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited
1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together"
1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace
1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document
1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error
1917117 - Common templates - disks screen: invalid disk name
1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator
1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk
1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened
1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console
1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory
1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7
1917327 - annotations.message maybe wrong for NTOPodsNotReady alert
1917367 - Refactor periodic.go
1917371 - Add docs on how to use the built-in profiler
1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console
1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui
1917484 - [BM][IPI] Failed to scale down machineset
1917522 - Deprecate --filter-by-os in oc adm catalog mirror
1917537 - controllers continuously busy reconciling operator
1917551 - use min_over_time for vsphere prometheus alerts
1917585 - OLM Operator install page missing i18n
1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types
1917605 - Deleting an exgw causes pods to no longer route to other exgws
1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API
1917656 - Add to Project/application for eventSources from topology shows 404
1917658 - Show TP badge for sources powered by camel connectors in create flow
1917660 - Editing parallelism of job get error info
1917678 - Could not provision pv when no symlink and target found on rhel worker
1917679 - Hide double CTA in admin pipelineruns tab
1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster.
1917759 - Console operator panics after setting plugin that does not exists to the console-operator config
1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0
1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0
1917799 - Gather s list of names and versions of installed OLM operators
1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error
1917814 - Show Broker create option in eventing under admin perspective
1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types
1917872 - [oVirt] rebase on latest SDK 2021-01-12
1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image
1917938 - upgrade version of dnsmasq package
1917942 - Canary controller causes panic in ingress-operator
1918019 - Undesired scrollbars in markdown area of QuickStart
1918068 - Flaky olm integration tests
1918085 - reversed name of job and namespace in cvo log
1918112 - Flavor is not editable if a customize VM is created from cli
1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources
1918132 - i18n: Volume Snapshot Contents menu is not translated
1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2
1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP
1918153 - When `&` character is set as an environment variable in a build config it is getting converted as `\u0026`
1918185 - Capitalization on PLR details page
1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections
1918318 - Kamelet connector's are not shown in eventing section under Admin perspective
1918351 - Gather SAP configuration (SCC & ClusterRoleBinding)
1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews
1918395 - [ovirt] increase livenessProbe period
1918415 - MCD nil pointer on dropins
1918438 - [ja_JP, zh_CN] Serverless i18n misses
1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig
1918471 - CustomNoUpgrade Feature gates are not working correctly
1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk
1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART
1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART
1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197
1918639 - Event listener with triggerRef crashes the console
1918648 - Subscription page doesn't show InstallPlan correctly
1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack
1918748 - helmchartrepo is not http(s)_proxy-aware
1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI
1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin
1918826 - Insights popover icons are not horizontally aligned
1918879 - need better debug for bad pull secrets
1918958 - The default NMstate instance from the operator is incorrect
1919097 - Close bracket ")" missing at the end of the sentence in the UI
1919231 - quick search modal cut off on smaller screens
1919259 - Make "Add x" singular in Pipeline Builder
1919260 - VM Template list actions should not wrap
1919271 - NM prepender script doesn't support systemd-resolved
1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry
1919379 - dotnet logo out of date
1919387 - Console login fails with no error when it can't write to localStorage
1919396 - A11y Violation: svg-img-alt on Pod Status ring
1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified
1919750 - Search InstallPlans got Minified React error
1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted
1919823 - OCP 4.7 Internationalization Chinese tranlate issue
1919851 - Visualization does not render when Pipeline & Task share same name
1919862 - The tip information for `oc new-project --skip-config-write` is wrong
1919876 - VM created via customize wizard cannot inherit template's PVC attributes
1919877 - Click on KSVC breaks with white screen
1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment
1919945 - user entered name value overridden by default value when selecting a git repository
1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference
1919970 - NTO does not update when the tuned profile is updated.
1919999 - Bump Cluster Resource Operator Golang Versions
1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration
1920200 - user-settings network error results in infinite loop of requests
1920205 - operator-registry e2e tests not working properly
1920214 - Bump golang to 1.15 in cluster-resource-override-admission
1920248 - re-running the pipelinerun with pipelinespec crashes the UI
1920320 - VM template field is "Not available" if it's created from common template
1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`
1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs
1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off
1920426 - Egress Router CNI OWNERS file should have ovn-k team members
1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username
1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time
1920438 - openshift-tuned panics on turning debugging on/off.
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn
1920481 - kuryr-cni pods using unreasonable amount of CPU
1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof
1920524 - Topology graph crashes adding Open Data Hub operator
1920526 - catalog operator causing CPU spikes and bad etcd performance
1920551 - Boot Order is not editable for Templates in "openshift" namespace
1920555 - bump cluster-resource-override-admission api dependencies
1920571 - fcp multipath will not recover failed paths automatically
1920619 - Remove default scheduler profile value
1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present
1920674 - MissingKey errors in bindings namespace
1920684 - Text in language preferences modal is misleading
1920695 - CI is broken because of bad image registry reference in the Makefile
1920756 - update generic-admission-server library to get the system:masters authorization optimization
1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set
1920771 - i18n: Delete persistent volume claim drop down is not translated
1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI
1920912 - Unable to power off BMH from console
1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2"
1920984 - [e2e][automation] some menu items names are out dated
1921013 - Gather PersistentVolume definition (if any) used in image registry config
1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)
1921087 - 'start next quick start' link doesn't work and is unintuitive
1921088 - test-cmd is failing on volumes.sh pretty consistently
1921248 - Clarify the kubelet configuration cr description
1921253 - Text filter default placeholder text not internationalized
1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window
1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo
1921277 - Fix Warning and Info log statements to handle arguments
handle arguments\n1921281 - oc get -o yaml --export returns \"error: unknown flag: --export\"\n1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn\u0027t exist\n1921556 - [OCS with Vault]: OCS pods didn\u0027t comeup after deploying with Vault details from UI\n1921572 - For external source (i.e GitHub Source) form view as well shows yaml\n1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass\n1921610 - Pipeline metrics font size inconsistency\n1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1921655 - [OSP] Incorrect error handling during cloudinfo generation\n1921713 - [e2e][automation]  fix failing VM migration tests\n1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view\n1921774 - delete application modal errors when a resource cannot be found\n1921806 - Explore page APIResourceLinks aren\u0027t i18ned\n1921823 - CheckBoxControls not internationalized\n1921836 - AccessTableRows don\u0027t internationalize \"User\" or \"Group\"\n1921857 - Test flake when hitting router in e2e tests due to one router not being up to date\n1921880 - Dynamic plugins are not initialized on console load in production mode\n1921911 - Installer PR #4589 is causing leak of IAM role policy bindings\n1921921 - \"Global Configuration\" breadcrumb does not use sentence case\n1921949 - Console bug - source code URL broken for gitlab self-hosted repositories\n1921954 - Subscription-related constraints in ResolutionFailed events are misleading\n1922015 - buttons in modal header are invisible on Safari\n1922021 - Nodes terminal page \u0027Expand\u0027 \u0027Collapse\u0027 button not translated\n1922050 - [e2e][automation] Improve vm clone tests\n1922066 - Cannot create VM from custom template which has extra disk\n1922098 - Namespace selection dialog is not closed after select a namespace\n1922099 - Updated Readme documentation for QE code review and setup\n1922146 - Egress Router CNI doesn\u0027t have logging support. 
\n1922267 - Collect specific ADFS error\n1922292 - Bump RHCOS boot images for 4.7\n1922454 - CRI-O doesn\u0027t enable pprof by default\n1922473 - reconcile LSO images for 4.8\n1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace\n1922782 - Source registry missing docker:// in yaml\n1922907 - Interop UI Tests - step implementation for updating feature files\n1922911 - Page crash when click the \"Stacked\" checkbox after clicking the data series toggle buttons\n1922991 - \"verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build\" test fails on OKD\n1923003 - WebConsole Insights widget showing \"Issues pending\" when the cluster doesn\u0027t report anything\n1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources\n1923102 - [vsphere-problem-detector-operator] pod\u0027s version is not correct\n1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot\n1923674 - k8s 1.20 vendor dependencies\n1923721 - PipelineRun running status icon is not rotating\n1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios\n1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator\n1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator\n1923874 - Unable to specify values with % in kubeletconfig\n1923888 - Fixes error metadata gathering\n1923892 - Update arch.md after refactor. \n1923894 - \"installed\" operator status in operatorhub page does not reflect the real status of operator\n1923895 - Changelog generation. \n1923911 - [e2e][automation] Improve tests for vm details page and list filter\n1923945 - PVC Name and Namespace resets when user changes os/flavor/workload\n1923951 - EventSources shows `undefined` in project\n1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins\n1924046 - Localhost: Refreshing on a Project removes it from nav item urls\n1924078 - Topology quick search View all results footer should be sticky. \n1924081 - NTO should ship the latest Tuned daemon release 2.15\n1924084 - backend tests incorrectly hard-code artifacts dir\n1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents  do not have unexpected content using a simple Docker Strategy Build\n1924135 - Under sufficient load, CRI-O may segfault\n1924143 - Code Editor Decorator url is broken for Bitbucket repos\n1924188 - Language selector dropdown doesn\u0027t always pre-select the language\n1924365 - Add extra disk for VM which use boot source PXE\n1924383 - Degraded network operator during upgrade to 4.7.z\n1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
\n1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can\u0027t set finalizers on\n1924583 - Deprecated templates are listed in the Templates screen\n1924870 - pick upstream pr#96901: plumb context with request deadline\n1924955 - Images from Private external registry not working in deploy Image\n1924961 - k8sutil.TrimDNS1123Label creates invalid values\n1924985 - Build egress-router-cni for both RHEL 7 and 8\n1925020 - Console demo plugin deployment image should not point to dockerhub\n1925024 - Remove extra validations on kafka source form view net section\n1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running\n1925072 - NTO needs to ship the current latest stalld v1.7.0\n1925163 - Missing info about dev catalog in boot source template column\n1925200 - Monitoring Alert icon is missing on the workload in Topology view\n1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1\n1925319 - bash syntax error in configure-ovs.sh script\n1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data\n1925516 - Pipeline Metrics Tooltips are overlapping data\n1925562 - Add new ArgoCD link from GitOps application environments page\n1925596 - Gitops details page image and commit id text overflows past card boundary\n1926556 - \u0027excessive etcd leader changes\u0027 test case failing in serial job because prometheus data is wiped by machine set test\n1926588 - The tarball of operator-sdk is not ready for ocp4.7\n1927456 - 4.7 still points to 4.6 catalog images\n1927500 - API server exits non-zero on 2 SIGTERM signals\n1929278 - Monitoring workloads using too high a priorityclass\n1929645 - Remove openshift:kubevirt-machine-controllers declaration from machine-api\n1929920 - Cluster monitoring documentation link is broken - 404 not found\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-10103\nhttps://access.redhat.com/security/cve/CVE-2018-10105\nhttps://access.redhat.com/security/cve/CVE-2018-14461\nhttps://access.redhat.com/security/cve/CVE-2018-14462\nhttps://access.redhat.com/security/cve/CVE-2018-14463\nhttps://access.redhat.com/security/cve/CVE-2018-14464\nhttps://access.redhat.com/security/cve/CVE-2018-14465\nhttps://access.redhat.com/security/cve/CVE-2018-14466\nhttps://access.redhat.com/security/cve/CVE-2018-14467\nhttps://access.redhat.com/security/cve/CVE-2018-14468\nhttps://access.redhat.com/security/cve/CVE-2018-14469\nhttps://access.redhat.com/security/cve/CVE-2018-14470\nhttps://access.redhat.com/security/cve/CVE-2018-14553\nhttps://access.redhat.com/security/cve/CVE-2018-14879\nhttps://access.redhat.com/security/cve/CVE-2018-14880\nhttps://access.redhat.com/security/cve/CVE-2018-14881\nhttps://access.redhat.com/security/cve/CVE-2018-14882\nhttps://access.redhat.com/security/cve/CVE-2018-16227\nhttps://access.redhat.com/security/cve/CVE-2018-16228\nhttps://access.redhat.com/security/cve/CVE-2018-16229\nhttps://access.redhat.com/security/cve/CVE-2018-16230\nhttps://access.redhat.com/security/cve/CVE-2018-16300\nhttps://access.redhat.com/security/cve/CVE-2018-16451\nhttps://access.redhat.com/security/cve/CVE-2018-16452\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2019-3884\nhttps://access.redhat.com/security/cve/CVE-2019-5018\nhttps://access.redhat.com/security/cve/CVE-2019-6977\nhttps://access.redhat.com/security/cve/CVE-2019-6978\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9455\nhttps://access.redhat.com/security/cve/CVE-2019-9458\nhttps://access.redhat.com/security/cve/CVE-2019-11068\nhttps://access.redhat.com/security/cve/CVE-2019-12614\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13225\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15165\nhttps://access.redhat.com/security/cve/CVE-2019-15166\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-15917\nhttps://access.redhat.com/security/cve/CVE-2019-1
5925\nhttps://access.redhat.com/security/cve/CVE-2019-16167\nhttps://access.redhat.com/security/cve/CVE-2019-16168\nhttps://access.redhat.com/security/cve/CVE-2019-16231\nhttps://access.redhat.com/security/cve/CVE-2019-16233\nhttps://access.redhat.com/security/cve/CVE-2019-16935\nhttps://access.redhat.com/security/cve/CVE-2019-17450\nhttps://access.redhat.com/security/cve/CVE-2019-17546\nhttps://access.redhat.com/security/cve/CVE-2019-18197\nhttps://access.redhat.com/security/cve/CVE-2019-18808\nhttps://access.redhat.com/security/cve/CVE-2019-18809\nhttps://access.redhat.com/security/cve/CVE-2019-19046\nhttps://access.redhat.com/security/cve/CVE-2019-19056\nhttps://access.redhat.com/security/cve/CVE-2019-19062\nhttps://access.redhat.com/security/cve/CVE-2019-19063\nhttps://access.redhat.com/security/cve/CVE-2019-19068\nhttps://access.redhat.com/security/cve/CVE-2019-19072\nhttps://access.redhat.com/security/cve/CVE-2019-19221\nhttps://access.redhat.com/security/cve/CVE-2019-19319\nhttps://access.redhat.com/security/cve/CVE-2019-19332\nhttps://access.redhat.com/security/cve/CVE-2019-19447\nhttps://access.redhat.com/security/cve/CVE-2019-19524\nhttps://access.redhat.com/security/cve/CVE-2019-19533\nhttps://access.redhat.com/security/cve/CVE-2019-19537\nhttps://access.redhat.com/security/cve/CVE-2019-19543\nhttps://access.redhat.com/security/cve/CVE-2019-19602\nhttps://access.redhat.com/security/cve/CVE-2019-19767\nhttps://access.redhat.com/security/cve/CVE-2019-19770\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20054\nhttps://access.redhat.com/security/cve/CVE-2019-20218\nhttps://access.redhat.com/security/cve/CVE-2019-20386\nhttps://access.redhat.com/security/cve/CVE-2019-20387\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20636\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/security/cve/CVE-2019-20812\nhttps://access.redhat.com/security/cve/CVE-2019-20907\nhttps://access.redhat.com/security/cve/CVE-2019-20916\nhttps://access.redhat.com/security/cve/CVE-2020-0305\nhttps://access.redhat.com/security/cve/CVE-2020-0444\nhttps://access.redhat.com/security/cve/CVE-2020-1716\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-1751\nhttps://access.redhat.com/security/cve/CVE-2020-1752\nhttps://access.redhat.com/security/cve/CVE-2020-1971\nhttps://access.redhat.com/security/cve/CVE-2020-2574\nhttps://access.redhat.com/security/cve/CVE-2020-2752\nhttps://access.redhat.com/security/cve/CVE-2020-2922\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3898\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-6405\
nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-8177\nhttps://access.redhat.com/security/cve/CVE-2020-8492\nhttps://access.redhat.com/security/cve/CVE-2020-8563\nhttps://access.redhat.com/security/cve/CVE-2020-8566\nhttps://access.redhat.com/security/cve/CVE-2020-8619\nhttps://access.redhat.com/security/cve/CVE-2020-8622\nhttps://access.redhat.com/security/cve/CVE-2020-8623\nhttps://access.redhat.com/security/cve/CVE-2020-8624\nhttps://access.redhat.com/security/cve/CVE-2020-8647\nhttps://access.redhat.com/security/cve/CVE-2020-8648\nhttps://access.redhat.com/security/cve/CVE-2020-8649\nhttps://access.redhat.com/security/cve/CVE-2020-9327\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-10029\nhttps://access.redhat.com/security/cve/CVE-2020-10732\nhttps://access.redhat.com/security/cve/CVE-2020-10749\nhttps://access.redhat.com/security/cve/CVE-2020-10751\nhttps://access.redhat.com/security/cve/CVE-2020-10763\nhttps://access.redhat.com/security/cve/CVE-2020-10773\nhttps://access.redhat.com/security/cve/CVE-2020-10774\nhttps://access.redhat.com/security/cve/CVE-2020-10942\nhttps://access.redhat.com/security/cve/CVE-2020-11565\nhttps://access.redhat.com/security/cve/CVE-2020-11668\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-12465\nhttps://access.redhat.com/security/cve/CVE-2020-12655\nhttps://access.redhat.com/security/cve/CVE-2020-12659\nhttps://access.redhat.com/security/cve/CVE-2020-12770\nhttps://access.redhat.com/security/cve/CVE-2020-12826\nhttps://access.redhat.com/security/cve/CVE-2020-13249\nhttps://access.redhat.com/security/cve/CVE-2020-13630\nhttps://access.redhat.com/security/cve/CVE-2020-13631\nhttps://access.redhat.com/security/cve/CVE-2020-13632\nhttps://access.redhat.com/security/cve/CVE-2020-14019\nhttps://access.redhat.com/security/cve/CVE-2020-14040\nhttps://access.redhat.com/security/cve/CVE-2020-14381\nhttps://access.redhat.com/security/cve/CVE-2020-14382\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-14422\nhttps://access.redhat.com/security/cve/CVE-2020-15157\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-15862\nhttps://access.redhat.com/security/cve/CVE-2020-15999\nhttps://access.redhat.com/security/cve/CVE-2020-16166\nhttps://access.redhat.com/security/cve/CVE-2020-24490\nhttps://access.redhat.com/security/cve/CVE-2020-24659\nhttps://access.redhat.com/security/cve/CVE-2020-25211\nhttps://access.redhat.com/security/cve/CVE-2020-25641\nhttps://access.redhat.com/security/cve/CVE-2020-25658\nhttps://access.redhat.com/security/cve/CVE-2020-25661\nhttps:
//access.redhat.com/security/cve/CVE-2020-25662\nhttps://access.redhat.com/security/cve/CVE-2020-25681\nhttps://access.redhat.com/security/cve/CVE-2020-25682\nhttps://access.redhat.com/security/cve/CVE-2020-25683\nhttps://access.redhat.com/security/cve/CVE-2020-25684\nhttps://access.redhat.com/security/cve/CVE-2020-25685\nhttps://access.redhat.com/security/cve/CVE-2020-25686\nhttps://access.redhat.com/security/cve/CVE-2020-25687\nhttps://access.redhat.com/security/cve/CVE-2020-25694\nhttps://access.redhat.com/security/cve/CVE-2020-25696\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-27813\nhttps://access.redhat.com/security/cve/CVE-2020-27846\nhttps://access.redhat.com/security/cve/CVE-2020-28362\nhttps://access.redhat.com/security/cve/CVE-2020-29652\nhttps://access.redhat.com/security/cve/CVE-2021-2007\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T\nlmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H\nEmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8\n4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4\nmWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL\nISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy\nAe5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk\n4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM\nuR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG\nkrzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv\nRjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6\nMcvuEaxco7U=\n=sw8i\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2020-05-26-10 iCloud for Windows 7.19\n\niCloud for Windows 7.19 is now available and addresses the following:\n\nImageIO\nAvailable for: Windows 7 and later\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2020-9789: Wenchao Li of VARAS@IIE\nCVE-2020-9790: Xingwei Lin of Ant-financial Light-Year Security Lab\n\nImageIO\nAvailable for: Windows 7 and later\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2020-3878: Samuel Gro\u00df of Google Project Zero\n\nSQLite\nAvailable for: Windows 7 and later\nImpact: A malicious application may cause a denial of service or\npotentially disclose memory contents\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2020-9794\n\nWebKit\nAvailable for: Windows 7 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A logic issue was addressed with improved restrictions. 
\nCVE-2020-9802: Samuel Gro\u00df of Google Project Zero\n\nWebKit\nAvailable for: Windows 7 and later\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2020-9805: an anonymous researcher\n\nWebKit\nAvailable for: Windows 7 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved\nmemory handling. \nCVE-2020-9800: Brendan Draper (@6r3nd4n) working with Trend Micro\nZero Day Initiative\n\nWebKit\nAvailable for: Windows 7 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2020-9806: Wen Xu of SSLab at Georgia Tech\nCVE-2020-9807: Wen Xu of SSLab at Georgia Tech\n\nWebKit\nAvailable for: Windows 7 and later\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2020-9850: @jinmo123, @setuid0x0_, and @insu_yun_en of\n@SSLab_Gatech working with Trend Micro\u2019s Zero Day Initiative\n\nWebKit\nAvailable for: Windows 7 and later\nImpact: Processing maliciously crafted web content may lead to a\ncross site scripting attack\nDescription: An input validation issue was addressed with improved\ninput validation. \nCVE-2020-9843: Ryan Pickren (ryanpickren.com)\n\nWebKit\nAvailable for: Windows 7 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2020-9803: Wen Xu of SSLab at Georgia Tech\n\nAdditional recognition\n\nImageIO\nWe would like to acknowledge Lei Sun for their assistance. \n\nWebKit\nWe would like to acknowledge Aidan Dunlap of UT Austin for their\nassistance. \n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. \n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. 
\n\nBug Fix(es):\n\n* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)\n\n* The compliancesuite object returns error with ocp4-cis tailored profile\n(BZ#1902251)\n\n* The compliancesuite does not trigger when there are multiple rhcos4\nprofiles added in scansettingbinding object (BZ#1902634)\n\n* [OCP v46] Not all remediations get applied through machineConfig although\nthe status of all rules shows Applied in ComplianceRemediations object\n(BZ#1907414)\n\n* The profile parser pod deployment and associated profiles should get\nremoved after upgrade the compliance operator (BZ#1908991)\n\n* Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error\n\"something else exists at that path\" (BZ#1909081)\n\n* [OCP v46] Always update the default profilebundles on Compliance operator\nstartup (BZ#1909122)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1899479 - Aggregator pod tries to parse ConfigMaps without results\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902251 - The compliancesuite object returns error with ocp4-cis tailored profile\n1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object\n1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object\n1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator\n1909081 - Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error \"something else exists at that path\"\n1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1732329 - Virtual Machine is missing documentation of its properties in yaml editor\n1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv\n1791753 - [RFE] [SSP] Template validator should check validations in template\u0027s parent template\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1848954 - KMP missing CA extensions  in cabundle of mutatingwebhookconfiguration\n1848956 - KMP  requires downtime for CA stabilization during certificate rotation\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1853911 - VM with dot in network name fails to start with unclear message\n1854098 - NodeNetworkState on workers doesn\u0027t have \"status\" key due to nmstate-handler pod failure to run \"nmstatectl show\"\n1856347 - SR-IOV : Missing network name for sriov during vm setup\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1859235 - Common Templates - after upgrade there are 2  common templates per each os-workload-flavor combination\n1860714 - No API information from `oc explain`\n1860992 - CNV upgrade - users are not removed from privileged  SecurityContextConstraints\n1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem\n1866593 - CDI is not handling vm disk clone\n1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs\n1868817 - Container-native Virtualization 2.6.0 Images\n1873771 - Improve the VMCreationFailed error message caused by VM low memory\n1874812 - SR-IOV: Guest Agent  expose link-local ipv6 address  for sometime and then remove it\n1878499 - DV import doesn\u0027t recover from scratch space PVC deletion\n1879108 - Inconsistent naming of \"oc virt\" command in help text\n1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running\n1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message\n1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used\n1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied\n1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1823765 - nfd-workers crash under an ipv6 environment\n1838802 - mysql8 connector from operatorhub does not work with metering operator\n1838845 - Metering operator can\u0027t connect to postgres DB from Operator Hub\n1841883 - namespace-persistentvolumeclaim-usage  query returns unexpected values\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1868294 - NFD operator does not allow customisation of nfd-worker.conf\n1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration\n1890672 - NFD is missing a build flag to build correctly\n1890741 - path to the CA trust bundle ConfigMap is broken in report operator\n1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster\n1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel\n1900125 - FIPS error while generating RSA private key for CA\n1906129 - OCP 4.7:  Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub\n1908492 - OCP 4.7:  Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub\n1913837 - The CI and ART 4.7 metering images are not mirrored\n1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le\n1916010 - olm skip range is set to the wrong range\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923998 - NFD Operator is failing to update and remains in Replacing state\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for  additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
\n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t resilient to WAL replays. \n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry 
and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String updates\n1961509 - DHCP daemon pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established\n1976301 - [ci] e2e-azure-upi is permafailing\n1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 
\n2007379 - Events are not generated for master offset  for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all the kebab actions. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. \n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift  clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs :  Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization  : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All,  data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed. \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. 
\n2025821 - Make \"Network Attachment Definitions\" available to regular user\n2025823 - The console nav bar ignores plugin separator in existing sections\n2025830 - CentOS capitalizaion is wrong\n2025837 - Warn users that the RHEL URL expire\n2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*\n2025903 - [UI] RoleBindings tab doesn\u0027t show correct rolebindings\n2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2026178 - OpenShift Alerting Rules Style-Guide Compliance\n2026209 - Updation of task is getting failed (tekton hub integration)\n2026223 - Internal error occurred: failed calling webhook \"ptpconfigvalidationwebhook.openshift.io\"\n2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates\n2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct\n2026352 - Kube-Scheduler revision-pruner fail during install of new cluster\n2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment\n2026383 - Error when rendering custom Grafana dashboard through ConfigMap\n2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation\n2026396 - Cachito Issues: sriov-network-operator Image build failure\n2026488 - openshift-controller-manager - delete event is repeating pathologically\n2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. \n2039359 - `oc adm prune deployments` can\u0027t prune the RS  where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service 
binding connector\n2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n\nCVE-2020-13753\n\n    Milan Crha discovered that an attacker may be able to execute\n    commands outside the bubblewrap sandbox. \n\nFor the stable distribution (buster), these problems have been fixed in\nversion 2.28.3-2~deb10u1. \n\nWe recommend that you upgrade your webkit2gtk packages. 
\n\nFor the detailed security status of webkit2gtk please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/webkit2gtk\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAl8PaKUACgkQEMKTtsN8\nTjaf3hAAjZCKrikC4P1I/xiuL6kRepTjURi3zeZl3YywPCFLi/irWXX+5U+ejZoM\nkek3wtdJc1thN8w9BbXhOVyLferb/xfnt5jUVtZ6JBNDKKWGXoTY0Qfdu2lH0vw5\nIV1lf5bvOdawrw/tVS9Uy3dTN1kXEBZ3q3XCpRXrBWEkrXtWG/yznGy0duebnI5h\nPM7D6R4PIKiCB3HBe9rszCIQrYcGQ/U3x8a/FPnPUO2TCRfVZG918M9yO1fN1v2R\n+08h0DcOU8ggIJQwJA9hm/V3mJWpTayHh/ouTI8PrIcwG0T2/qbtUm/9cj0wvmXW\nId+RgXtQAyKeXQoXD5oP9jzVDgmm7rn03Rn2FX5hzAdTJAqdvT/Mr4IDNcOgdS8O\nwXmGprdRvMzx0gXO5YpeTuhjQiCZS1fB9ByIOMq/7lIjpiBctrhTZQvlSMMyauIQ\nP7tTTT8zCZo0DIQc/c2KyCXlD9/ORZm801U5wpXwPXT9Zq8wRAp5PodK/4plOzKc\nJyJiPI6BR41+31C438nl3wifO/wLh8+6nHAb2rkRQSe6Tu9SKyqOmYT9Ev6JZi/1\nR8NMBFSmTYM/XUv5ECsTeL3uLvDnCpKAR0EnWz5z1Cqy2AYzEthEf+1dwXAooYvO\n2johOWMaUrSWJMsYdZjTFEahaCSO5oPTDlbZCB2yIgpc6P70irU=\n=L5lA\n-----END PGP SIGNATURE-----\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-9850"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "db": "ZDI",
        "id": "ZDI-20-672"
      },
      {
        "db": "VULHUB",
        "id": "VHN-187975"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9850"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "157881"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "168878"
      }
    ],
    "trust": 3.33
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-187975",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-187975"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-9850",
        "trust": 4.3
      },
      {
        "db": "ZDI",
        "id": "ZDI-20-672",
        "trust": 1.3
      },
      {
        "db": "PACKETSTORM",
        "id": "159447",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU98042162",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153",
        "trust": 0.8
      },
      {
        "db": "ZDI_CAN",
        "id": "ZDI-CAN-10773",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "158572",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-1256",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "157881",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2403",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2610",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2509",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2419",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1870",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "NSFOCUS",
        "id": "49308",
        "trust": 0.6
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2020/07/10/1",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-187975",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9850",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160624",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160889",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161546",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168878",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-672"
      },
      {
        "db": "VULHUB",
        "id": "VHN-187975"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9850"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "157881"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "168878"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-1256"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9850"
      }
    ]
  },
  "id": "VAR-202006-1640",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-187975"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2024-07-23T20:21:37.869000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT211179",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/ht211179"
      },
      {
        "title": "HT211181",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/ht211181"
      },
      {
        "title": "HT211168",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/ht211168"
      },
      {
        "title": "HT211171",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/ht211171"
      },
      {
        "title": "HT211175",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/ht211175"
      },
      {
        "title": "HT211177",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/ht211177"
      },
      {
        "title": "HT211178",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/ht211178"
      },
      {
        "title": "HT211181",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/ht211181"
      },
      {
        "title": "HT211168",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/ht211168"
      },
      {
        "title": "HT211171",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/ht211171"
      },
      {
        "title": "HT211175",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/ht211175"
      },
      {
        "title": "HT211177",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/ht211177"
      },
      {
        "title": "HT211178",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/ht211178"
      },
      {
        "title": "HT211179",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/ht211179"
      },
      {
        "title": "",
        "trust": 0.7,
        "url": "https://support.apple.com/en-gb/ht211177"
      },
      {
        "title": "Multiple Apple product WebKit Fixes for component security vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=121015"
      },
      {
        "title": "The Register",
        "trust": 0.2,
        "url": "https://www.theregister.co.uk/2020/05/28/apple_may_updates/"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2020-9850 log"
      },
      {
        "title": "Debian Security Advisories: DSA-4724-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=dea2e0f2e732c4316e7997209f1f239a"
      },
      {
        "title": "Check Point Security Alerts: Apple Multiple Products Type Confusion (CVE-2020-9850)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=check_point_security_alerts\u0026qid=f9eb75698518ca07b897b5f7cc7b0724"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204451 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210050 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210436 - security advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225924 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210190 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220056 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205605 - security advisory"
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-672"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9850"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-1256"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-Other",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-Other",
        "trust": 0.8
      },
      {
        "problemtype": "CWE-noinfo",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9850"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211168"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211171"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211175"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211177"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211178"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211179"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211181"
      },
      {
        "trust": 1.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9850"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-9850"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu98042162/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.8,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.7,
        "url": "https://support.apple.com/en-gb/ht211177"
      },
      {
        "trust": 0.7,
        "url": "https://packetstormsecurity.com/files/159447/safari-type-confusion-sandbox-escape.html"
      },
      {
        "trust": 0.7,
        "url": "https://www.debian.org/security/2020/dsa-4724"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ger2atkzxdhm7ffyjh67zpnzzx5vouvm/"
      },
      {
        "trust": 0.6,
        "url": "https://security.gentoo.org/glsa/202007-11"
      },
      {
        "trust": 0.6,
        "url": "https://usn.ubuntu.com/4422-1/"
      },
      {
        "trust": 0.6,
        "url": "http://lists.opensuse.org/opensuse-security-announce/2020-07/msg00074.html"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/jdbxq2xa6x4dp4ytpxbomkslwued2kar/"
      },
      {
        "trust": 0.6,
        "url": "http://www.openwall.com/lists/oss-security/2020/07/10/1"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1870/"
      },
      {
        "trust": 0.6,
        "url": "http://www.nsfocus.net/vulndb/49308"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht211179"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht211178"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-multiple-vulnerabilities-32802"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2403/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2610/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/kb/ht211178"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/kb/ht211177"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2419/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.zerodayinitiative.com/advisories/zdi-20-672/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/158572/gentoo-linux-security-advisory-202007-11.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2509/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/157881/apple-security-advisory-2020-05-26-10.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.4,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9806"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9843"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9805"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9803"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9802"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://security.archlinux.org/cve-2020-9850"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_openshift_container_s"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5605"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1885700]"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8237"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27831"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27832"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24490"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-2007"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19072"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12655"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9458"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13249"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20054"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19602"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25641"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8647"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15917"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://\u0027"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6978"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0444"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16233"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25694"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2752"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19062"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19046"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14381"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19767"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19533"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16167"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9455"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19063"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19319"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8563"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9800"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht204283"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9789"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3878"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9794"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9790"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13753"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/webkit2gtk"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-672"
      },
      {
        "db": "VULHUB",
        "id": "VHN-187975"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9850"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "157881"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "168878"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-1256"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9850"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "ZDI",
        "id": "ZDI-20-672"
      },
      {
        "db": "VULHUB",
        "id": "VHN-187975"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9850"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "157881"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "168878"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-1256"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9850"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-05-27T00:00:00",
        "db": "ZDI",
        "id": "ZDI-20-672"
      },
      {
        "date": "2020-06-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-187975"
      },
      {
        "date": "2020-06-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-9850"
      },
      {
        "date": "2020-07-02T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "date": "2020-12-18T19:14:41",
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "date": "2021-01-11T16:29:48",
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "date": "2021-02-25T15:29:25",
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "date": "2020-05-29T19:06:06",
        "db": "PACKETSTORM",
        "id": "157881"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2020-07-28T19:12:00",
        "db": "PACKETSTORM",
        "id": "168878"
      },
      {
        "date": "2020-05-26T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202005-1256"
      },
      {
        "date": "2020-06-09T17:15:15.143000",
        "db": "NVD",
        "id": "CVE-2020-9850"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-05-27T00:00:00",
        "db": "ZDI",
        "id": "ZDI-20-672"
      },
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-187975"
      },
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-9850"
      },
      {
        "date": "2020-07-02T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      },
      {
        "date": "2022-03-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202005-1256"
      },
      {
        "date": "2023-01-09T16:41:59.350000",
        "db": "NVD",
        "id": "CVE-2020-9850"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-1256"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "plural  Apple Product logic vulnerabilities",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-006153"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-1256"
      }
    ],
    "trust": 0.6
  }
}

