Vulnerabilities related to Tenable - Nessus Network Monitor
var-202012-1527
Vulnerability from variot
The X.509 GeneralName type is a generic type for representing different types of names. One of those name types is known as EDIPartyName. OpenSSL provides a function GENERAL_NAME_cmp which compares different instances of a GENERAL_NAME to see if they are equal or not. This function behaves incorrectly when both GENERAL_NAMEs contain an EDIPARTYNAME. A NULL pointer dereference and a crash may occur leading to a possible denial of service attack. OpenSSL itself uses the GENERAL_NAME_cmp function for two purposes: 1) Comparing CRL distribution point names between an available CRL and a CRL distribution point embedded in an X509 certificate 2) When verifying that a timestamp response token signer matches the timestamp authority name (exposed via the API functions TS_RESP_verify_response and TS_RESP_verify_token) If an attacker can control both items being compared then that attacker could trigger a crash. For example if the attacker can trick a client or server into checking a malicious certificate against a malicious CRL then this may occur. Note that some applications automatically download CRLs based on a URL embedded in a certificate. This checking happens prior to the signatures on the certificate and CRL being verified. OpenSSL's s_server, s_client and verify tools have support for the "-crl_download" option which implements automatic CRL downloading and this attack has been demonstrated to work against those tools. Note that an unrelated bug means that affected versions of OpenSSL cannot parse or construct correct encodings of EDIPARTYNAME. However it is possible to construct a malformed EDIPARTYNAME that OpenSSL's parser will accept and hence trigger this attack. All OpenSSL 1.1.1 and 1.0.2 versions are affected by this issue. Other OpenSSL releases are out of support and have not been checked. Fixed in OpenSSL 1.1.1i (Affected 1.1.1-1.1.1h). Fixed in OpenSSL 1.0.2x (Affected 1.0.2-1.0.2w). 
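The affected ranges above (1.0.2 through 1.0.2w, 1.1.1 through 1.1.1h) can be checked mechanically against a version string. A minimal sketch; the helper name and version parsing are ours, not from the advisory:

```python
# Check whether an OpenSSL version string falls inside the ranges the
# advisory lists as affected by CVE-2020-1971.
# Affected: 1.0.2 through 1.0.2w (fixed in 1.0.2x)
#           1.1.1 through 1.1.1h (fixed in 1.1.1i)
import re

def is_affected(version: str) -> bool:
    """Return True if `version` (e.g. '1.1.1h') is in an affected range."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)([a-z]?)", version)
    if not m:
        raise ValueError(f"unrecognized OpenSSL version: {version!r}")
    base = tuple(int(x) for x in m.group(1, 2, 3))
    letter = m.group(4)  # '' sorts before 'a', matching OpenSSL's letter ordering
    if base == (1, 0, 2):
        return letter <= "w"   # 1.0.2 .. 1.0.2w affected, 1.0.2x fixed
    if base == (1, 1, 1):
        return letter <= "h"   # 1.1.1 .. 1.1.1h affected, 1.1.1i fixed
    return False               # other branches are out of scope here

if __name__ == "__main__":
    for v in ("1.1.1h", "1.1.1i", "1.0.2w", "1.0.2x"):
        print(v, is_affected(v))
```

Note that this only encodes the two branches the advisory names; out-of-support releases were not checked upstream, so a `False` result for them means "unknown", not "safe".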
The OpenSSL Project has published OpenSSL Security Advisory [08 December 2020]. Severity: High. EDIPARTYNAME NULL pointer dereference - CVE-2020-1971. OpenSSL's GENERAL_NAME_cmp() function compares X.509 data such as the host names included in certificates. If both arguments passed to GENERAL_NAME_cmp() are of type EDIPartyName, a NULL pointer dereference (CWE-476) may occur in the function and crash the server or client application calling it. By causing crafted X.509 certificate verification to be performed, an attacker may mount a denial-of-service (DoS) attack. The product supports a variety of cryptographic algorithms, including symmetric ciphers and secure hash algorithms. A denial of service security issue exists in OpenSSL prior to 1.1.1i.

This could prevent installations to flavors detected as baremetal, which might have the required capacity to complete the installation. This is usually caused by OpenStack administrators not setting the appropriate metadata on their bare metal flavors. Validations are now skipped on flavors detected as baremetal, to prevent incorrect failures from being reported. (BZ#1889416)
- Previously, there was a broken link on the OperatorHub install page of the web console, which was intended to reference the cluster monitoring documentation.

Bugs fixed (https://bugzilla.redhat.com/):

1885442 - Console doesn't load in iOS Safari when using self-signed certificates
1885946 - console-master-e2e-gcp-console test periodically fail due to no Alerts found
1887551 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1888165 - [release 4.6] IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1888650 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888717 - Cypress: Fix 'link-name' accesibility violation
1888721 - ovn-masters stuck in crashloop after scale test
1890993 - Selected Capacity is showing wrong size
1890994 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1891427 - CLI does not save login credentials as expected when using the same username in multiple clusters
1891454 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1891499 - Other machine config pools do not show during update
1891891 - Wrong detail head on network policy detail page.

Relevant releases/architectures:
Red Hat Enterprise Linux Server AUS (v. 7.4) - x86_64
Red Hat Enterprise Linux Server E4S (v. 7.4) - ppc64le, x86_64
Red Hat Enterprise Linux Server Optional AUS (v. 7.4) - x86_64
Red Hat Enterprise Linux Server Optional E4S (v. 7.4) - ppc64le, x86_64
Red Hat Enterprise Linux Server Optional TUS (v. 7.4) - x86_64
Red Hat Enterprise Linux Server TUS (v. 7.4) - x86_64
- Description:
OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library.
Security Fix(es):
- openssl: EDIPARTYNAME NULL pointer de-reference (CVE-2020-1971)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
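Applying the updated RPMs alone is not sufficient: long-running processes keep the old library mapped in memory until they are restarted. One common way to spot them on Linux is to look for processes still mapping a libssl/libcrypto object that was replaced on disk; this sketch relies on generic `/proc` conventions and is our illustration, not a tool from the advisory:

```shell
#!/bin/sh
# List PIDs still mapping a libssl/libcrypto shared object that was
# replaced on disk. The kernel marks such stale mappings "(deleted)";
# those processes keep running the old (vulnerable) code until restarted.
for maps in /proc/[0-9]*/maps; do
    pid=${maps#/proc/}; pid=${pid%/maps}
    if grep -E 'lib(ssl|crypto).*\(deleted\)' "$maps" >/dev/null 2>&1; then
        echo "restart needed: pid $pid ($(tr '\0' ' ' < /proc/$pid/cmdline))"
    fi
done
```

An empty result after the update suggests everything linked against OpenSSL has already been restarted; a reboot achieves the same effect wholesale.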
For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.

Package List:
Red Hat Enterprise Linux Server AUS (v. 7.4):
Source: openssl-1.0.2k-9.el7_4.src.rpm
x86_64:
openssl-1.0.2k-9.el7_4.x86_64.rpm
openssl-debuginfo-1.0.2k-9.el7_4.i686.rpm
openssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm
openssl-devel-1.0.2k-9.el7_4.i686.rpm
openssl-devel-1.0.2k-9.el7_4.x86_64.rpm
openssl-libs-1.0.2k-9.el7_4.i686.rpm
openssl-libs-1.0.2k-9.el7_4.x86_64.rpm
Red Hat Enterprise Linux Server E4S (v. 7.4):
Source: openssl-1.0.2k-9.el7_4.src.rpm
ppc64le:
openssl-1.0.2k-9.el7_4.ppc64le.rpm
openssl-debuginfo-1.0.2k-9.el7_4.ppc64le.rpm
openssl-devel-1.0.2k-9.el7_4.ppc64le.rpm
openssl-libs-1.0.2k-9.el7_4.ppc64le.rpm

x86_64:
openssl-1.0.2k-9.el7_4.x86_64.rpm
openssl-debuginfo-1.0.2k-9.el7_4.i686.rpm
openssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm
openssl-devel-1.0.2k-9.el7_4.i686.rpm
openssl-devel-1.0.2k-9.el7_4.x86_64.rpm
openssl-libs-1.0.2k-9.el7_4.i686.rpm
openssl-libs-1.0.2k-9.el7_4.x86_64.rpm
Red Hat Enterprise Linux Server TUS (v. 7.4):
Source: openssl-1.0.2k-9.el7_4.src.rpm
x86_64:
openssl-1.0.2k-9.el7_4.x86_64.rpm
openssl-debuginfo-1.0.2k-9.el7_4.i686.rpm
openssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm
openssl-devel-1.0.2k-9.el7_4.i686.rpm
openssl-devel-1.0.2k-9.el7_4.x86_64.rpm
openssl-libs-1.0.2k-9.el7_4.i686.rpm
openssl-libs-1.0.2k-9.el7_4.x86_64.rpm
Red Hat Enterprise Linux Server Optional AUS (v. 7.4):
x86_64:
openssl-debuginfo-1.0.2k-9.el7_4.i686.rpm
openssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm
openssl-perl-1.0.2k-9.el7_4.x86_64.rpm
openssl-static-1.0.2k-9.el7_4.i686.rpm
openssl-static-1.0.2k-9.el7_4.x86_64.rpm
Red Hat Enterprise Linux Server Optional E4S (v. 7.4):
ppc64le:
openssl-debuginfo-1.0.2k-9.el7_4.ppc64le.rpm
openssl-perl-1.0.2k-9.el7_4.ppc64le.rpm
openssl-static-1.0.2k-9.el7_4.ppc64le.rpm

x86_64:
openssl-debuginfo-1.0.2k-9.el7_4.i686.rpm
openssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm
openssl-perl-1.0.2k-9.el7_4.x86_64.rpm
openssl-static-1.0.2k-9.el7_4.i686.rpm
openssl-static-1.0.2k-9.el7_4.x86_64.rpm
Red Hat Enterprise Linux Server Optional TUS (v.

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
=====================================================================
Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update Advisory ID: RHSA-2020:5633-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2020:5633 Issue date: 2021-02-24 CVE Names: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 
CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.7.0 is now available.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2020:5634
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64
The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x
The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le
The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6
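The digests listed above let you pin the exact release image instead of trusting a mutable tag; registries accept the `repo@sha256:…` form for this. A small illustration of building that pinned reference (the helper name is ours, and it assumes a tagged reference on a registry without a port):

```python
# Build a digest-pinned image reference from a tagged reference plus the
# digest published above, so the exact release image is retrieved.
def pin_by_digest(tagged_ref: str, digest: str) -> str:
    """'quay.io/ns/repo:tag' + 'sha256:...' -> 'quay.io/ns/repo@sha256:...'

    Assumes the reference carries a tag and the registry host has no
    explicit port (a port would also contain ':').
    """
    if not digest.startswith("sha256:"):
        raise ValueError("expected a sha256: digest")
    repo, _, _tag = tagged_ref.rpartition(":")  # strip the trailing tag
    return f"{repo}@{digest}"

print(pin_by_digest(
    "quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64",
    "sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70",
))
```

The resulting `quay.io/openshift-release-dev/ocp-release@sha256:…` reference can then be passed to the same `oc adm release info` invocation shown above.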
All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.
Security Fix(es):
- crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)
- golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)
- kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)
- containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)
- heketi: gluster-block volume password details available in logs (CVE-2020-10763)
- golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)
- jwt-go: access restriction bypass vulnerability (CVE-2020-26160)
- golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)
- golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.
- Bugs fixed (https://bugzilla.redhat.com/):
1620608 - Restoring deployment config with history leads to weird state
1752220 - [OVN] Network Policy fails to work when project label gets overwritten
1756096 - Local storage operator should implement must-gather spec
1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
1768255 - installer reports 100% complete but failing components
1770017 - Init containers restart when the exited container is removed from node.
1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating
1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset
1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale
1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands
1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions
1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved"
1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor
1801089 - [OVN] Installation failed and monitoring pod not created due to some network error.
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image
1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration
1806000 - CRI-O failing with: error reserving ctr name
1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1810438 - Installation logs are not gathered from OCP nodes
1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist
1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation
1813012 - EtcdDiscoveryDomain no longer needed
1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints
1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use
1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
1819457 - Package Server is in 'Cannot update' status despite properly working
1820141 - [RFE] deploy qemu-quest-agent on the nodes
1822744 - OCS Installation CI test flaking
1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario
1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool
1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file
1829723 - User workload monitoring alerts fire out of the box
1832968 - oc adm catalog mirror does not mirror the index image itself
1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN
1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters
1834995 - olmFull suite always fails once th suite is run on the same cluster
1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz
1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4
1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks
1838751 - [oVirt][Tracker] Re-enable skipped network tests
1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups
1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed
1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP
1841119 - Get rid of config patches and pass flags directly to kcm
1841175 - When an Install Plan gets deleted, OLM does not create a new one
1841381 - Issue with memoryMB validation
1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option
1844727 - Etcd container leaves grep and lsof zombie processes
1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs
1847074 - Filter bar layout issues at some screen widths on search page
1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural
1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5
1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service
1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard
1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing
1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD
1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service
1853115 - the restriction of --cloud option should be shown in help text.
1853116 - --to option does not work with --credentials-requests flag.
1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854567 - "Installed Operators" list showing "duplicated" entries during installation
1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present
1855351 - Inconsistent Installer reactions to Ctrl-C during user input process
1855408 - OVN cluster unstable after running minimal scale test
1856351 - Build page should show metrics for when the build ran, not the last 30 minutes
1856354 - New APIServices missing from OpenAPI definitions
1857446 - ARO/Azure: excessive pod memory allocation causes node lockup
1857877 - Operator upgrades can delete existing CSV before completion
1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed
1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created
1860136 - default ingress does not propagate annotations to route object on update
1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed"
1860518 - unable to stop a crio pod
1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller
1862430 - LSO: PV creation lock should not be acquired in a loop
1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group.
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the create api help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure.
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: “?” button/icon in Developer Console ->Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - oc api-resources does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE]Improve boot source description for 'Container' and 'URL'
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod's oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in oc explain
1880787 - No description for Provisioning CRD for oc explain
1880902 - need dnsPolicy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails with error: the container name "assisted-installer" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested minimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Maintenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visually impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accessibility violations
1885706 - Cypress: Fix 'link-name' accessibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad rootDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with "no fc disk found" error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgrade from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - oc explain localvolumediscovery returns empty description
1887751 - oc explain localvolumediscoveryresult returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take effect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accessibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure environment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging cause problems. It doesn't sync the "overall" sha; it syncs only the sub-arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots
1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability
1893546 - Deploy using virtual media fails on node cleaning step
1893601 - overview filesystem utilization of OCP is showing the wrong values
1893645 - oc describe route SIGSEGV
1893648 - Ironic image building process is not compatible with UEFI secure boot
1893724 - OperatorHub generates incorrect RBAC
1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted
1893776 - No useful metrics for image pull time available, making debugging issues there impossible
1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator
1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD
1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS
1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped
1893944 - Wrong product name for Multicloud Object Gateway
1893953 - (release-4.7) Gather default StatefulSet configs
1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating"
1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser
1893972 - Should skip e2e test cases as early as possible
1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://'
1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective
1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set
1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used.
1894065 - tag new packages to enable TLS support
1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0
1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries
1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM
1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted
1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)
1894216 - Improve OpenShift Web Console availability
1894275 - Fix CRO owners file to reflect node owner
1894278 - "database is locked" error when adding bundle to index image
1894330 - upgrade channels need to be updated for 4.7
1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient"
1894374 - Dont prevent the user from uploading a file with incorrect extension
1894432 - [oVirt] sometimes installer timeout on tmp_import_vm
1894477 - bash syntax error in nodeip-configuration.service
1894503 - add automated test for Polarion CNV-5045
1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform
1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets
1894645 - Cinder volume provisioning crashes on nil cloud provider
1894677 - image-pruner job is panicking: klog stack
1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0
1894860 - 'backend' CI job passing despite failing tests
1894910 - Update the node to use the real-time kernel fails
1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package
1895065 - Schema / Samples / Snippets Tabs are all selected at the same time
1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI
1895141 - panic in service-ca injector
1895147 - Remove memory limits on openshift-dns
1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation
1895268 - The bundleAPIs should NOT be empty
1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster
1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release"
1895360 - Machine Config Daemon removes a file although its defined in the dropin
1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1
1895372 - Web console going blank after selecting any operator to install from OperatorHub
1895385 - Revert KUBELET_LOG_LEVEL back to level 3
1895423 - unable to edit an application with a custom builder image
1895430 - unable to edit custom template application
1895509 - Backup taken on one master cannot be restored on other masters
1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image
1895838 - oc explain description contains '/'
1895908 - "virtio" option is not available when modifying a CD-ROM to disk type
1895909 - e2e-metal-ipi-ovn-dualstack is failing
1895919 - NTO fails to load kernel modules
1895959 - configuring webhook token authentication should prevent cluster upgrades
1895979 - Unable to get coreos-installer with --copy-network to work
1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV
1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)
1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed
1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1897003 - VNC console cannot be connected after visit it in new window
1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals
1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO
1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored
1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces
1897138 - oVirt provider uses deprecated cluster-api project
1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly
1897252 - Firing alerts are not showing up in console UI after cluster is up for some time
1897354 - Operator installation showing success, but Provided APIs are missing
1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused"
1897412 - [sriov]disableDrain did not be updated in CRD of manifest
1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page
1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost'
1897520 - After restarting nodes the image-registry co is in degraded true state.
1897584 - Add casc plugins
1897603 - Cinder volume attachment detection failure in Kubelet
1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized"
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests
1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition
1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot Create OCS Cluster Service
1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing
1897897 - ptp lose sync openshift 4.6
1898036 - no network after reboot (IPI)
1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically
1898097 - mDNS floods the baremetal network
1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem
1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster
1898174 - [OVN] EgressIP does not guard against node IP assignment
1898194 - GCP: can't install on custom machine types
1898238 - Installer validations allow same floating IP for API and Ingress
1898268 - [OVN]: make check broken on 4.6
1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default
1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover
1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display.
1898407 - [Deployment timing regression] Deployment takes longer with 4.7
1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service
1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine
1898500 - Failure to upgrade operator when a Service is included in a Bundle
1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic
1898532 - Display names defined in specDescriptors not respected
1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted
1898613 - Whereabouts should exclude IPv6 ranges
1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase
1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator
1898839 - Wrong YAML in operator metadata
1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job
1898873 - Remove TechPreview Badge from Monitoring
1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way
1899111 - [RFE] Update jenkins-maven-agen to maven36
1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist
1899175 - bump the RHCOS boot images for 4.7
1899198 - Use new packages for ipa ramdisks
1899200 - In Installed Operators page I cannot search for an Operator by it's name
1899220 - Support AWS IMDSv2
1899350 - configure-ovs.sh doesn't configure bonding options
1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found"
1899459 - Failed to start monitoring pods once the operator removed from override list of CVO
1899515 - Passthrough credentials are not immediately re-distributed on update
1899575 - update discovery burst to reflect lots of CRDs on openshift clusters
1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
1899588 - Operator objects are re-created after all other associated resources have been deleted
1899600 - Increased etcd fsync latency as of OCP 4.6
1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup
1899627 - Project dashboard Active status using small icon
1899725 - Pods table does not wrap well with quick start sidebar open
1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)
1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality
1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0"
1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config configmap
1899853 - additionalSecurityGroupIDs not working for master nodes
1899922 - NP changes sometimes influence new pods.
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet
1900008 - Fix internationalized sentence fragments in ImageSearch.tsx
1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx
1900020 - Remove ' from internationalized keys
1900022 - Search Page - Top labels field is not applied to selected Pipeline resources
1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently
1900126 - Creating a VM results in suggestion to create a default storage class when one already exists
1900138 - [OCP on RHV] Remove insecure mode from the installer
1900196 - stalld is not restarted after crash
1900239 - Skip "subPath should be able to unmount" NFS test
1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists
1900377 - [e2e][automation] create new css selector for active users
1900496 - (release-4.7) Collect spec config for clusteroperator resources
1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue
1900759 - include qemu-guest-agent by default
1900790 - Track all resource counts via telemetry
1900835 - Multus errors when cachefile is not found
1900935 - oc adm release mirror panic: runtime error
1900989 - accessing the route cannot wake up the idled resources
1901040 - When scaling down the status of the node is stuck on deleting
1901057 - authentication operator health check failed when installing a cluster behind proxy
1901107 - pod donut shows incorrect information
1901111 - Installer dependencies are broken
1901200 - linuxptp-daemon crash when enable debug log level
1901301 - CBO should handle platform=BM without provisioning CR
1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly
1901363 - High Podready Latency due to timed out waiting for annotations
1901373 - redundant bracket on snapshot restore button
1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true"
1901395 - "Edit virtual machine template" action link should be removed
1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting
1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP
1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema
1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance"
1901604 - CNO blocks editing Kuryr options
1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled
1901909 - The device plugin pods / cni pod are restarted every 5 minutes
1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service
1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error
1902059 - Wire a real signer for service account issuer
1902091 - cluster-image-registry-operator pod leaves connections open when fails connecting S3 storage
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod
1902253 - MHC status doesn't set RemediationsAllowed = 0
1902299 - Failed to mirror operator catalog - error: destination registry required
1902545 - Cinder csi driver node pod should add nodeSelector for Linux
1902546 - Cinder csi driver node pod doesn't run on master node
1902547 - Cinder csi driver controller pod doesn't run on master node
1902552 - Cinder csi driver does not use the downstream images
1902595 - Project workloads list view doesn't show alert icon and hover message
1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent
1902601 - Cinder csi driver pods run as BestEffort qosClass
1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group
1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails
1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked
1902824 - failed to generate semver informed package manifest: unable to determine default channel
1902894 - hybrid-overlay-node crashing trying to get node object during initialization
1902969 - Cannot load vmi detail page
1902981 - It should default to current namespace when create vm from template
1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI
1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry
1903034 - OLM continuously printing debug logs
1903062 - [Cinder csi driver] Deployment mounted volume have no write access
1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
1903107 - Enable vsphere-problem-detector e2e tests
1903164 - OpenShift YAML editor jumps to top every few seconds
1903165 - Improve Canary Status Condition handling for e2e tests
1903172 - Column Management: Fix sticky footer on scroll
1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled
1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format:
1903192 - Role name missing on create role binding form
1903196 - Popover positioning is misaligned for Overview Dashboard status items
1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends.
1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components
1903248 - Backport Upstream Static Pod UID patch
1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]
1903290 - Kubelet repeatedly log the same log line from exited containers
1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption.
1903382 - Panic when task-graph is canceled with a TaskNode with no tasks
1903400 - Migrate a VM which is not running goes to pending state
1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page
1903414 - NodePort is not working when configuring an egress IP address
1903424 - mapi_machine_phase_transition_seconds_sum doesn't work
1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum"
1903639 - Hostsubnet gatherer produces wrong output
1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service
1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started
1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image
1903717 - Handle different Pod selectors for metal3 Deployment
1903733 - Scale up followed by scale down can delete all running workers
1903917 - Failed to load "Developer Catalog" page
1903999 - Httplog response code is always zero
1904026 - The quota controllers should resync on new resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases
1904125 - Boostrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - [sig-builds][Feature:Builds] s2i build with a root user image tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE -Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalids previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for oc image append does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE gherkin design scenario-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate this without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - oc image append can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-config.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot collect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accessible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized Autoscaled & Autoscaling pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - 'Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - Active alerts section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The overrides of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipeline creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was introduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384 custom type (n1-custom-4-16384)
1908180 - Add source for template is stuck in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing ignore-volume-az is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod gets restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show kubectl diff command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Selected capacity chart shows HDD disk even on selection of SSD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory Path to Devfile
1910805 - Missing translation for Pipeline status and breadcrumb text
1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer
1910840 - Show Nonexistent command info in the oc rollback -h help page
1910859 - breadcrumbs doesn't use last namespace
1910866 - Unify templates string
1910870 - Unify template dropdown action
1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6
1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads"
1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard
1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration"
1911213 - Wrong and misleading warning for VMs that were created manually (not from template)
1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created
1911269 - waiting for the build message present when build exists
1911280 - Builder images are not detected for Dotnet, Httpd, NGINX
1911307 - Pod Scale-up requires extra privileges in OpenShift web-console
1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template
1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error
1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template
1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation
1911418 - [v2v] The target storage class name is not displayed if default storage class is used
1911434 - git ops empty state page displays icon with watermark
1911443 - SSH Certification field should be validated
1911465 - IOPS display wrong unit
1911474 - Devfile Application Group Does Not Delete Cleanly (errors)
1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController
1911574 - Expose volume mode on Upload Data form
1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined
1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel
1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle''
1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state
1911782 - Descheduler should not evict pod used local storage by the PVC
1911796 - uploading flow being displayed before submitting the form
1912066 - The ansible type operator's manager container is not stable when managing the CR
1912077 - helm operator's default rbac forbidden
1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory'
1912237 - Rebase CSI sidecars for 4.7
1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page
1912409 - Fix flow schema deployment
1912434 - Update guided tour modal title
1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken
1912523 - Standalone pod status not updating in topology graph
1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion
1912558 - TaskRun list and detail screen doesn't show Pending status
1912563 - p&f: carry 97206: clean up executing request on panic
1912565 - OLM macOS local build broken by moby/term dependency
1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion
1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff
1912590 - publicImageRepository not being populated
1912640 - Go operator's controller pods is forbidden
1912701 - Handle dual-stack configuration for NIC IP
1912703 - multiple queries can't be plotted in the same graph under some conditions
1912730 - Operator backed: In-context should support visual connector if SBO is not installed
1912828 - Align High Performance VMs with High Performance in RHV-UI
1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates
1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URL's
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early before it's succeeded eventually
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - update info alert this template is not editable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\\n\"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in from dockerfile flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk source
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - Its not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling
1914793 - device names should not be translated
1914894 - Warn about using non-groupified api version
1914926 - webdriver-manager pulls incorrect version of ChromeDriver due to a bug
1914932 - Put correct resource name in relatedObjects
1914938 - PVC disk is not shown on customization wizard general tab
1914941 - VM Template rootdisk is not deleted after fetching default disk bus
1914975 - Collect logs from openshift-sdn namespace
1915003 - No estimate of average node readiness during lifetime of a cluster
1915027 - fix MCS blocking iptables rules
1915041 - s3:ListMultipartUploadParts is relied on implicitly
1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons
1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours
1915085 - Pods created and rapidly terminated get stuck
1915114 - [aws-c2s] worker machines are not create during install
1915133 - Missing default pinned nav items in dev perspective
1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource
1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot
1915188 - Remove HostSubnet anonymization
1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment
1915217 - OKD payloads expect to be signed with production keys
1915220 - Remove dropdown workaround for user settings
1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure
1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod
1915277 - [e2e][automation]fix cdi upload form test
1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout
1915304 - Updating scheduling component builder & base images to be consistent with ART
1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node
1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection
1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod
1915357 - Dev Catalog doesn't load anything if virtualization operator is installed
1915379 - New template wizard should require provider and make support input a dropdown type
1915408 - Failure in operator-registry kind e2e test
1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation
1915460 - Cluster name size might affect installations
1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance
1915540 - Silent 4.7 RHCOS install failure on ppc64le
1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
1915582 - p&f: carry upstream pr 97860
1915594 - [e2e][automation] Improve test for disk validation
1915617 - Bump bootimage for various fixes
1915624 - "Please fill in the following field: Template provider" blocks customize wizard
1915627 - Translate Guided Tour text.
1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error
1915647 - Intermittent White screen when the connector dragged to revision
1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased
1915654 - [e2e][automation] Add a verification for Affinity modal should hint "Matching node found"
1915661 - Can't run the 'oc adm prune' command in a pod
1915672 - Kuryr doesn't work with selfLink disabled.
1915674 - Golden image PVC creation - storage size should be taken from the template
1915685 - Message for not supported template is not clear enough
1915760 - Need to increase timeout to wait rhel worker get ready
1915793 - quick starts panel syncs incorrectly across browser windows
1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster
1915818 - vsphere-problem-detector: use "_totals" in metrics
1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol
1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version
1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0
1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics
1915885 - Kuryr doesn't support workers running on multiple subnets
1915898 - TaskRun log output shows "undefined" in streaming
1915907 - test/cmd/builds.sh uses docker.io
1915912 - sig-storage-csi-snapshotter image not available
1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard
1915939 - Resizing the browser window removes Web Terminal Icon
1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7
1915962 - ROKS: manifest with machine health check fails to apply in 4.7
1915972 - Global configuration breadcrumbs do not work as expected
1915981 - Install ethtool and conntrack in container for debugging
1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception
1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups
1916021 - OLM enters infinite loop if Pending CSV replaces itself
1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry
1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations
1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk
1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration
1916145 - Explicitly set minimum versions of python libraries
1916164 - Update csi-driver-nfs builder & base images to be consistent with ART
1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7
1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third
1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2
1916379 - error metrics from vsphere-problem-detector should be gauge
1916382 - Can't create ext4 filesystems with Ignition
1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates
1916401 - Deleting an ingress controller with a bad DNS Record hangs
1916417 - [Kuryr] Must-gather does not have all Custom Resources information
1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image
1916454 - teach CCO about upgradeability from 4.6 to 4.7
1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documentation
1916502 - Boot disk mirroring fails with mdadm error
1916524 - Two rootdisk shows on storage step
1916580 - Default yaml is broken for VM and VM template
1916621 - oc adm node-logs examples are wrong
1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret.
1916692 - Possibly fails to destroy LB and thus cluster
1916711 - Update Kube dependencies in MCO to 1.20.0
1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6
1916764 - editing a workload with no application applied, will auto fill the app
1916834 - Pipeline Metrics - Text Updates
1916843 - collect logs from openshift-sdn-controller pod
1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed
1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually
1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited
1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together"
1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace
1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document
1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error
1917117 - Common templates - disks screen: invalid disk name
1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator
1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk
1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened
1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console
1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory
1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7
1917327 - annotations.message maybe wrong for NTOPodsNotReady alert
1917367 - Refactor periodic.go
1917371 - Add docs on how to use the built-in profiler
1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console
1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui
1917484 - [BM][IPI] Failed to scale down machineset
1917522 - Deprecate --filter-by-os in oc adm catalog mirror
1917537 - controllers continuously busy reconciling operator
1917551 - use min_over_time for vsphere prometheus alerts
1917585 - OLM Operator install page missing i18n
1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types
1917605 - Deleting an exgw causes pods to no longer route to other exgws
1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API
1917656 - Add to Project/application for eventSources from topology shows 404
1917658 - Show TP badge for sources powered by camel connectors in create flow
1917660 - Editing parallelism of job get error info
1917678 - Could not provision pv when no symlink and target found on rhel worker
1917679 - Hide double CTA in admin pipelineruns tab
1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster.
1917759 - Console operator panics after setting plugin that does not exist to the console-operator config
1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0
1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0
1917799 - Gather a list of names and versions of installed OLM operators
1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error
1917814 - Show Broker create option in eventing under admin perspective
1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types
1917872 - [oVirt] rebase on latest SDK 2021-01-12
1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image
1917938 - upgrade version of dnsmasq package
1917942 - Canary controller causes panic in ingress-operator
1918019 - Undesired scrollbars in markdown area of QuickStart
1918068 - Flaky olm integration tests
1918085 - reversed name of job and namespace in cvo log
1918112 - Flavor is not editable if a customize VM is created from cli
1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources
1918132 - i18n: Volume Snapshot Contents menu is not translated
1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2
1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin are not installed on OSP
1918153 - When `&` character is set as an environment variable in a build config it is getting converted as `\u0026`
1918185 - Capitalization on PLR details page
1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections
1918318 - Kamelet connector's are not shown in eventing section under Admin perspective
1918351 - Gather SAP configuration (SCC & ClusterRoleBinding)
1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews
1918395 - [ovirt] increase livenessProbe period
1918415 - MCD nil pointer on dropins
1918438 - [ja_JP, zh_CN] Serverless i18n misses
1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig
1918471 - CustomNoUpgrade Feature gates are not working correctly
1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk
1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART
1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART
1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197
1918639 - Event listener with triggerRef crashes the console
1918648 - Subscription page doesn't show InstallPlan correctly
1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack
1918748 - helmchartrepo is not http(s)_proxy-aware
1918757 - Consistent failures of features/project-creation.feature Cypress test in CI
1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin
1918826 - Insights popover icons are not horizontally aligned
1918879 - need better debug for bad pull secrets
1918958 - The default NMstate instance from the operator is incorrect
1919097 - Close bracket ")" missing at the end of the sentence in the UI
1919231 - quick search modal cut off on smaller screens
1919259 - Make "Add x" singular in Pipeline Builder
1919260 - VM Template list actions should not wrap
1919271 - NM prepender script doesn't support systemd-resolved
1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry
1919379 - dotnet logo out of date
1919387 - Console login fails with no error when it can't write to localStorage
1919396 - A11y Violation: svg-img-alt on Pod Status ring
1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified
1919750 - Search InstallPlans got Minified React error
1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted
1919823 - OCP 4.7 Internationalization Chinese translation issue
1919851 - Visualization does not render when Pipeline & Task share same name
1919862 - The tip information for `oc new-project --skip-config-write` is wrong
1919876 - VM created via customize wizard cannot inherit template's PVC attributes
1919877 - Click on KSVC breaks with white screen
1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment
1919945 - user entered name value overridden by default value when selecting a git repository
1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference
1919970 - NTO does not update when the tuned profile is updated.
1919999 - Bump Cluster Resource Operator Golang Versions
1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration
1920200 - user-settings network error results in infinite loop of requests
1920205 - operator-registry e2e tests not working properly
1920214 - Bump golang to 1.15 in cluster-resource-override-admission
1920248 - re-running the pipelinerun with pipelinespec crashes the UI
1920320 - VM template field is "Not available" if it's created from common template
1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`
1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs
1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off
1920426 - Egress Router CNI OWNERS file should have ovn-k team members
1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username
1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time
1920438 - openshift-tuned panics on turning debugging on/off.
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn
1920481 - kuryr-cni pods using unreasonable amount of CPU
1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof
1920524 - Topology graph crashes adding Open Data Hub operator
1920526 - catalog operator causing CPU spikes and bad etcd performance
1920551 - Boot Order is not editable for Templates in "openshift" namespace
1920555 - bump cluster-resource-override-admission api dependencies
1920571 - fcp multipath will not recover failed paths automatically
1920619 - Remove default scheduler profile value
1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present
1920674 - MissingKey errors in bindings namespace
1920684 - Text in language preferences modal is misleading
1920695 - CI is broken because of bad image registry reference in the Makefile
1920756 - update generic-admission-server library to get the system:masters authorization optimization
1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set
1920771 - i18n: Delete persistent volume claim drop down is not translated
1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI
1920912 - Unable to power off BMH from console
1920981 - When OCS was deployed with arbiter mode enabled, add capacity is increasing the count by "2"
1920984 - [e2e][automation] some menu items names are outdated
1921013 - Gather PersistentVolume definition (if any) used in image registry config
1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)
1921087 - 'start next quick start' link doesn't work and is unintuitive
1921088 - test-cmd is failing on volumes.sh pretty consistently
1921248 - Clarify the kubelet configuration cr description
1921253 - Text filter default placeholder text not internationalized
1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window
1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo
1921277 - Fix Warning and Info log statements to handle arguments
1921281 - oc get -o yaml --export returns "error: unknown flag: --export"
1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist
1921556 - [OCS with Vault]: OCS pods didn't comeup after deploying with Vault details from UI
1921572 - For external source (i.e GitHub Source) form view as well shows yaml
1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass
1921610 - Pipeline metrics font size inconsistency
1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921655 - [OSP] Incorrect error handling during cloudinfo generation
1921713 - [e2e][automation] fix failing VM migration tests
1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view
1921774 - delete application modal errors when a resource cannot be found
1921806 - Explore page APIResourceLinks aren't i18ned
1921823 - CheckBoxControls not internationalized
1921836 - AccessTableRows don't internationalize "User" or "Group"
1921857 - Test flake when hitting router in e2e tests due to one router not being up to date
1921880 - Dynamic plugins are not initialized on console load in production mode
1921911 - Installer PR #4589 is causing leak of IAM role policy bindings
1921921 - "Global Configuration" breadcrumb does not use sentence case
1921949 - Console bug - source code URL broken for gitlab self-hosted repositories
1921954 - Subscription-related constraints in ResolutionFailed events are misleading
1922015 - buttons in modal header are invisible on Safari
1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated
1922050 - [e2e][automation] Improve vm clone tests
1922066 - Cannot create VM from custom template which has extra disk
1922098 - Namespace selection dialog is not closed after selecting a namespace
1922099 - Updated Readme documentation for QE code review and setup
1922146 - Egress Router CNI doesn't have logging support.
1922267 - Collect specific ADFS error
1922292 - Bump RHCOS boot images for 4.7
1922454 - CRI-O doesn't enable pprof by default
1922473 - reconcile LSO images for 4.8
1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace
1922782 - Source registry missing docker:// in yaml
1922907 - Interop UI Tests - step implementation for updating feature files
1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons
1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD
1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything
1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources
1923102 - [vsphere-problem-detector-operator] pod's version is not correct
1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot
1923674 - k8s 1.20 vendor dependencies
1923721 - PipelineRun running status icon is not rotating
1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios
1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator
1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator
1923874 - Unable to specify values with % in kubeletconfig
1923888 - Fixes error metadata gathering
1923892 - Update arch.md after refactor.
1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator
1923895 - Changelog generation.
1923911 - [e2e][automation] Improve tests for vm details page and list filter
1923945 - PVC Name and Namespace resets when user changes os/flavor/workload
1923951 - EventSources shows `undefined` in project
1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins
1924046 - Localhost: Refreshing on a Project removes it from nav item urls
1924078 - Topology quick search View all results footer should be sticky.
1924081 - NTO should ship the latest Tuned daemon release 2.15
1924084 - backend tests incorrectly hard-code artifacts dir
1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build
1924135 - Under sufficient load, CRI-O may segfault
1924143 - Code Editor Decorator url is broken for Bitbucket repos
1924188 - Language selector dropdown doesn't always pre-select the language
1924365 - Add extra disk for VM which use boot source PXE
1924383 - Degraded network operator during upgrade to 4.7.z
1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box.
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on
1924583 - Deprecated templates are listed in the Templates screen
1924870 - pick upstream pr#96901: plumb context with request deadline
1924955 - Images from Private external registry not working in deploy Image
1924961 - k8sutil.TrimDNS1123Label creates invalid values
1924985 - Build egress-router-cni for both RHEL 7 and 8
1925020 - Console demo plugin deployment image should not point to dockerhub
1925024 - Remove extra validations on kafka source form view net section
1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running
1925072 - NTO needs to ship the current latest stalld v1.7.0
1925163 - Missing info about dev catalog in boot source template column
1925200 - Monitoring Alert icon is missing on the workload in Topology view
1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1
1925319 - bash syntax error in configure-ovs.sh script
1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data
1925516 - Pipeline Metrics Tooltips are overlapping data
1925562 - Add new ArgoCD link from GitOps application environments page
1925596 - Gitops details page image and commit id text overflows past card boundary
1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test
1926588 - The tarball of operator-sdk is not ready for ocp4.7
1927456 - 4.7 still points to 4.6 catalog images
1927500 - API server exits non-zero on 2 SIGTERM signals
1929278 - Monitoring workloads using too high a priorityclass
1929645 - Remove openshift:kubevirt-machine-controllers declaration from machine-api
1929920 - Cluster monitoring documentation link is broken - 404 not found
- References:
https://access.redhat.com/security/cve/CVE-2018-10103 https://access.redhat.com/security/cve/CVE-2018-10105 https://access.redhat.com/security/cve/CVE-2018-14461 https://access.redhat.com/security/cve/CVE-2018-14462 https://access.redhat.com/security/cve/CVE-2018-14463 https://access.redhat.com/security/cve/CVE-2018-14464 https://access.redhat.com/security/cve/CVE-2018-14465 https://access.redhat.com/security/cve/CVE-2018-14466 https://access.redhat.com/security/cve/CVE-2018-14467 https://access.redhat.com/security/cve/CVE-2018-14468 https://access.redhat.com/security/cve/CVE-2018-14469 https://access.redhat.com/security/cve/CVE-2018-14470 https://access.redhat.com/security/cve/CVE-2018-14553 https://access.redhat.com/security/cve/CVE-2018-14879 https://access.redhat.com/security/cve/CVE-2018-14880 https://access.redhat.com/security/cve/CVE-2018-14881 https://access.redhat.com/security/cve/CVE-2018-14882 https://access.redhat.com/security/cve/CVE-2018-16227 https://access.redhat.com/security/cve/CVE-2018-16228 https://access.redhat.com/security/cve/CVE-2018-16229 https://access.redhat.com/security/cve/CVE-2018-16230 https://access.redhat.com/security/cve/CVE-2018-16300 https://access.redhat.com/security/cve/CVE-2018-16451 https://access.redhat.com/security/cve/CVE-2018-16452 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2019-3884 https://access.redhat.com/security/cve/CVE-2019-5018 https://access.redhat.com/security/cve/CVE-2019-6977 https://access.redhat.com/security/cve/CVE-2019-6978 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 
https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9455 https://access.redhat.com/security/cve/CVE-2019-9458 https://access.redhat.com/security/cve/CVE-2019-11068 https://access.redhat.com/security/cve/CVE-2019-12614 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13225 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15165 https://access.redhat.com/security/cve/CVE-2019-15166 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-15917 https://access.redhat.com/security/cve/CVE-2019-15925 https://access.redhat.com/security/cve/CVE-2019-16167 https://access.redhat.com/security/cve/CVE-2019-16168 https://access.redhat.com/security/cve/CVE-2019-16231 https://access.redhat.com/security/cve/CVE-2019-16233 https://access.redhat.com/security/cve/CVE-2019-16935 https://access.redhat.com/security/cve/CVE-2019-17450 https://access.redhat.com/security/cve/CVE-2019-17546 https://access.redhat.com/security/cve/CVE-2019-18197 https://access.redhat.com/security/cve/CVE-2019-18808 
https://access.redhat.com/security/cve/CVE-2019-18809 https://access.redhat.com/security/cve/CVE-2019-19046 https://access.redhat.com/security/cve/CVE-2019-19056 https://access.redhat.com/security/cve/CVE-2019-19062 https://access.redhat.com/security/cve/CVE-2019-19063 https://access.redhat.com/security/cve/CVE-2019-19068 https://access.redhat.com/security/cve/CVE-2019-19072 https://access.redhat.com/security/cve/CVE-2019-19221 https://access.redhat.com/security/cve/CVE-2019-19319 https://access.redhat.com/security/cve/CVE-2019-19332 https://access.redhat.com/security/cve/CVE-2019-19447 https://access.redhat.com/security/cve/CVE-2019-19524 https://access.redhat.com/security/cve/CVE-2019-19533 https://access.redhat.com/security/cve/CVE-2019-19537 https://access.redhat.com/security/cve/CVE-2019-19543 https://access.redhat.com/security/cve/CVE-2019-19602 https://access.redhat.com/security/cve/CVE-2019-19767 https://access.redhat.com/security/cve/CVE-2019-19770 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-19956 https://access.redhat.com/security/cve/CVE-2019-20054 https://access.redhat.com/security/cve/CVE-2019-20218 https://access.redhat.com/security/cve/CVE-2019-20386 https://access.redhat.com/security/cve/CVE-2019-20387 https://access.redhat.com/security/cve/CVE-2019-20388 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20636 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-20812 https://access.redhat.com/security/cve/CVE-2019-20907 https://access.redhat.com/security/cve/CVE-2019-20916 https://access.redhat.com/security/cve/CVE-2020-0305 https://access.redhat.com/security/cve/CVE-2020-0444 https://access.redhat.com/security/cve/CVE-2020-1716 https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-1751 https://access.redhat.com/security/cve/CVE-2020-1752 
https://access.redhat.com/security/cve/CVE-2020-1971 https://access.redhat.com/security/cve/CVE-2020-2574 https://access.redhat.com/security/cve/CVE-2020-2752 https://access.redhat.com/security/cve/CVE-2020-2922 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3898 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-6405 https://access.redhat.com/security/cve/CVE-2020-7595 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-8177 https://access.redhat.com/security/cve/CVE-2020-8492 https://access.redhat.com/security/cve/CVE-2020-8563 https://access.redhat.com/security/cve/CVE-2020-8566 https://access.redhat.com/security/cve/CVE-2020-8619 https://access.redhat.com/security/cve/CVE-2020-8622 https://access.redhat.com/security/cve/CVE-2020-8623 https://access.redhat.com/security/cve/CVE-2020-8624 https://access.redhat.com/security/cve/CVE-2020-8647 https://access.redhat.com/security/cve/CVE-2020-8648 https://access.redhat.com/security/cve/CVE-2020-8649 https://access.redhat.com/security/cve/CVE-2020-9327 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 
https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-10029 https://access.redhat.com/security/cve/CVE-2020-10732 https://access.redhat.com/security/cve/CVE-2020-10749 https://access.redhat.com/security/cve/CVE-2020-10751 https://access.redhat.com/security/cve/CVE-2020-10763 https://access.redhat.com/security/cve/CVE-2020-10773 https://access.redhat.com/security/cve/CVE-2020-10774 https://access.redhat.com/security/cve/CVE-2020-10942 https://access.redhat.com/security/cve/CVE-2020-11565 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-12465 https://access.redhat.com/security/cve/CVE-2020-12655 https://access.redhat.com/security/cve/CVE-2020-12659 https://access.redhat.com/security/cve/CVE-2020-12770 https://access.redhat.com/security/cve/CVE-2020-12826 https://access.redhat.com/security/cve/CVE-2020-13249 https://access.redhat.com/security/cve/CVE-2020-13630 https://access.redhat.com/security/cve/CVE-2020-13631 https://access.redhat.com/security/cve/CVE-2020-13632 https://access.redhat.com/security/cve/CVE-2020-14019 https://access.redhat.com/security/cve/CVE-2020-14040 https://access.redhat.com/security/cve/CVE-2020-14381 https://access.redhat.com/security/cve/CVE-2020-14382 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-14422 https://access.redhat.com/security/cve/CVE-2020-15157 
https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-15862 https://access.redhat.com/security/cve/CVE-2020-15999 https://access.redhat.com/security/cve/CVE-2020-16166 https://access.redhat.com/security/cve/CVE-2020-24490 https://access.redhat.com/security/cve/CVE-2020-24659 https://access.redhat.com/security/cve/CVE-2020-25211 https://access.redhat.com/security/cve/CVE-2020-25641 https://access.redhat.com/security/cve/CVE-2020-25658 https://access.redhat.com/security/cve/CVE-2020-25661 https://access.redhat.com/security/cve/CVE-2020-25662 https://access.redhat.com/security/cve/CVE-2020-25681 https://access.redhat.com/security/cve/CVE-2020-25682 https://access.redhat.com/security/cve/CVE-2020-25683 https://access.redhat.com/security/cve/CVE-2020-25684 https://access.redhat.com/security/cve/CVE-2020-25685 https://access.redhat.com/security/cve/CVE-2020-25686 https://access.redhat.com/security/cve/CVE-2020-25687 https://access.redhat.com/security/cve/CVE-2020-25694 https://access.redhat.com/security/cve/CVE-2020-25696 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-27813 https://access.redhat.com/security/cve/CVE-2020-27846 https://access.redhat.com/security/cve/CVE-2020-28362 https://access.redhat.com/security/cve/CVE-2020-29652 https://access.redhat.com/security/cve/CVE-2021-2007 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

Solution:
For information on upgrading Ansible Tower, reference the Ansible Tower Upgrade and Migration Guide: https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/index.html
- Bugs fixed (https://bugzilla.redhat.com/):
1790277 - CVE-2019-20372 nginx: HTTP request smuggling in configurations with URL redirect used as error_page
1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper escaping in the jQuery.htmlPrefilter method
1850004 - CVE-2020-11023 jquery: Passing HTML containing option elements to manipulation methods could result in untrusted code execution
1911314 - CVE-2020-35678 python-autobahn: allows redirect header injection
1928847 - CVE-2021-20253 ansible-tower: Privilege escalation via job isolation escape
- This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.
This release adds the new Apache HTTP Server 2.4.37 Service Pack 6 packages that are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most significant bug fixes and enhancements included in this release. Solution:
Before applying the update, back up your existing installation, including all applications, configuration files, databases and database settings, and so on.
The References section of this erratum contains a download link for the update. You must be logged in to download the update. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module 1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values 1916813 - CVE-2021-20191 ansible: multiple modules expose secured values 1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option 1939349 - CVE-2021-3447 ansible: multiple modules expose secured values
- ========================================================================== Ubuntu Security Notice USN-4662-1 December 08, 2020
openssl, openssl1.0 vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Ubuntu 16.04 LTS
Summary:
OpenSSL could be made to crash if it processed specially crafted input.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.10: libssl1.1 1.1.1f-1ubuntu4.1
Ubuntu 20.04 LTS: libssl1.1 1.1.1f-1ubuntu2.1
Ubuntu 18.04 LTS: libssl1.0.0 1.0.2n-1ubuntu5.5 libssl1.1 1.1.1-1ubuntu2.1~18.04.7
Ubuntu 16.04 LTS: libssl1.0.0 1.0.2g-1ubuntu4.18
After a standard system update you need to reboot your computer to make all the necessary changes.
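After applying the update, a quick runtime check can confirm which OpenSSL build linked software actually loads. This is only a sanity-check sketch using Python's standard ssl module: distributions such as Ubuntu backport security fixes without bumping the upstream version letter, so the package versions listed above remain the authoritative indicator.

```python
import ssl

# Report the OpenSSL version the Python interpreter was linked against.
# Note: distributions often backport security fixes without changing the
# upstream version string (e.g. the fix ships in package 1.1.1f-1ubuntu2.1
# while this still reports 1.1.1f), so treat this as informational only.
print(ssl.OPENSSL_VERSION)       # e.g. "OpenSSL 1.1.1f  31 Mar 2020"
print(ssl.OPENSSL_VERSION_INFO)  # numeric tuple, e.g. (1, 1, 1, 6, 15)
```

Checking the installed package version (for example with `dpkg -s libssl1.1`) remains the reliable way to verify the fix on Debian-derived systems.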
This issue was reported to OpenSSL on 9th November 2020 by David Benjamin (Google). Initial analysis was performed by David Benjamin with additional analysis by Matt Caswell (OpenSSL). The fix was developed by Matt Caswell.
Note
OpenSSL 1.0.2 is out of support and no longer receiving public updates.
References
URL for this Security Advisory: https://www.openssl.org/news/secadv/20201208.txt
Note: the online version of the advisory may be updated with additional details over time.
For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html
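The affected and fixed ranges quoted in the advisory (fixed in 1.1.1i for 1.1.1-1.1.1h, and in 1.0.2x for 1.0.2-1.0.2w) can be encoded mechanically. The sketch below uses a hypothetical helper name `is_affected` and relies on the fact that OpenSSL letter releases sort alphabetically; it applies only to upstream version strings, not to distribution-patched packages.

```python
def is_affected(version: str) -> bool:
    """Return True if an upstream OpenSSL version string falls in a range
    affected by CVE-2020-1971: 1.1.1 through 1.1.1h, or 1.0.2 through
    1.0.2w. Letter releases (1.1.1a, 1.1.1b, ...) sort alphabetically."""
    for base, fixed_suffix in (("1.1.1", "i"), ("1.0.2", "x")):
        if version == base:
            return True  # the plain base release predates all letter releases
        if version.startswith(base):
            return version[len(base):] < fixed_suffix
    # Other release lines are out of support and were not checked upstream.
    return False

print(is_affected("1.1.1h"))  # True
print(is_affected("1.1.1i"))  # False
```

As the advisory notes, OpenSSL 1.0.2 was already out of public support at disclosure time, so 1.0.2x fixes were available only to premium-support customers.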
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202012-1527", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.57" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "12.12.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, 
"vendor": "fedoraproject", "version": "33" }, { "model": "mysql server", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.15" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.22" }, { "model": "solidfire", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.3" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.0.0" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.0.2x" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.13.0" }, { "model": "e-series santricity os controller", "scope": "gte", "trust": 1.0, "vendor": "netapp", "version": "11.0.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "sinec infrastructure network services", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0.1.1" }, { "model": "communications unified session manager", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "scz8.2.5" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.3" }, { "model": "log correlation engine", "scope": "lt", "trust": 1.0, "vendor": "tenable", "version": "6.0.9" }, { "model": "ef600a", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "communications subscriber-aware load balancer", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.3" }, { "model": "data ontap", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "hci storage node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.5.0.0.0" }, { "model": "node.js", "scope": 
"lt", "trust": 1.0, "vendor": "nodejs", "version": "12.20.1" }, { "model": "communications session router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.3" }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "manageability software development kit", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "oncommand insight", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "enterprise manager base platform", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.3.0.0" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.3.0" }, { "model": "enterprise manager base platform", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.4.0.0" }, { "model": "enterprise session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.2" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "15.0.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "10.23.1" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.3.4" }, { "model": "communications diameter intelligence hub", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.1.0" }, { "model": "nessus network monitor", "scope": "lt", "trust": 1.0, "vendor": "tenable", "version": "5.13.1" }, { "model": "http server", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.4.0" }, { "model": "jd edwards enterpriseone tools", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "9.2.5.3" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": 
"10.12.0" }, { "model": "communications subscriber-aware load balancer", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.2" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.4" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "14.14.0" }, { "model": "communications subscriber-aware load balancer", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.4" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.56" }, { "model": "communications session router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.2" }, { "model": "enterprise manager ops center", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.4.0.0" }, { "model": "communications diameter intelligence hub", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.0" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.4.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.15.4" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.15.0" }, { "model": "snapcenter", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "communications session router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.4" }, { "model": "e-series santricity os controller", "scope": "lte", "trust": 1.0, "vendor": "netapp", "version": "11.60.3" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.58" }, { "model": "hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": 
"openssl", "version": "1.1.1i" }, { "model": "communications cloud native core network function cloud native environment", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "1.10.0" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.3.0" }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise communications broker", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "pcz3.3" }, { "model": "plug-in for symantec netbackup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.13.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "32" }, { "model": "api gateway", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "11.1.2.4.0" }, { "model": "communications diameter intelligence hub", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.2.3" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "5.7.32" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.0.2" }, { "model": "enterprise communications broker", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "pcz3.2" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.0.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "15.5.0" }, { "model": "mysql", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.22" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.9.0.0.0" }, { "model": "essbase", "scope": "eq", 
"trust": 1.0, "vendor": "oracle", "version": "21.2" }, { "model": "aff a250", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.2" }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "clustered data ontap antivirus connector", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "cz8.4" }, { "model": "communications diameter intelligence hub", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.2.0" }, { "model": "enterprise manager for storage management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.4.0.0" }, { "model": "enterprise communications broker", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "pcz3.1" }, { "model": "hitachi ops center analyzer viewpoint", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "openssl", "scope": "eq", "trust": 0.8, "vendor": "openssl", "version": "note that, 1.1.0 is no longer supported has not been evaluated for this vulnerability." 
}, { "model": "jp1/base", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/automatic job management system 3", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-009865" }, { "db": "NVD", "id": "CVE-2020-1971" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.2x", "versionStartIncluding": "1.0.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1i", "versionStartIncluding": "1.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:api_gateway:11.1.2.4.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.56:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:12.2.1.3.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:12.2.1.4.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_base_platform:13.3.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:5.5.0.0.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_base_platform:13.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:http_server:12.2.1.4.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_for_storage_management:13.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_ops_center:12.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.22", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:19.3.4:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:20.3.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:essbase:21.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.2.5.3", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:pcz3.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:enterprise_communications_broker:pcz3.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:pcz3.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_unified_session_manager:scz8.2.5:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_session_border_controller:cz8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_session_border_controller:cz8.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_session_border_controller:cz8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_subscriber-aware_load_balancer:cz8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_subscriber-aware_load_balancer:cz8.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_subscriber-aware_load_balancer:cz8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_router:cz8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_router:cz8.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_router:cz8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:cz8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:cz8.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:cz8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.22", "versionStartIncluding": "8.0.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.7.32", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:5.9.0.0.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_function_cloud_native_environment:1.10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_diameter_intelligence_hub:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.1.0", "versionStartIncluding": "8.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_diameter_intelligence_hub:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.2.3", "versionStartIncluding": "8.2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapcenter:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_insight:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:data_ontap:-:*:*:*:*:7-mode:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:h:netapp:hci_storage_node:-:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:manageability_software_development_kit:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:e-series_santricity_os_controller:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "11.60.3", "versionStartIncluding": "11.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:plug-in_for_symantec_netbackup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:windows:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:ef600a_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:ef600a:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:aff_a250_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:aff_a250:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:tenable:log_correlation_engine:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.0.9", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "5.13.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:siemens:sinec_infrastructure_network_services:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "14.14.0", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "10.12.0", "versionStartIncluding": "10.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "12.12.0", "versionStartIncluding": "12.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "15.5.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "14.15.4", "versionStartIncluding": "14.15.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "12.20.1", "versionStartIncluding": "12.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "10.23.1", "versionStartIncluding": "10.13.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-1971" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "160654" }, { "db": "PACKETSTORM", "id": "160644" }, { "db": "PACKETSTORM", "id": "161546" }, { "db": "PACKETSTORM", "id": "161727" }, { "db": "PACKETSTORM", "id": "161382" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "160651" } ], "trust": 0.7 }, "cve": "CVE-2020-1971", 
"cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-1971", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-173115", "impactScore": 2.9, "integrityImpact": 
"NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.2, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 5.9, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-1971", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-1971", "trust": 1.8, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-173115", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-1971", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-173115" }, { "db": "VULMON", "id": "CVE-2020-1971" }, { "db": "JVNDB", "id": "JVNDB-2020-009865" }, { "db": "NVD", "id": "CVE-2020-1971" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The X.509 GeneralName type is a generic type for representing different types of names. One of those name types is known as EDIPartyName. OpenSSL provides a function GENERAL_NAME_cmp which compares different instances of a GENERAL_NAME to see if they are equal or not. 
This function behaves incorrectly when both GENERAL_NAMEs contain an EDIPARTYNAME. A NULL pointer dereference and a crash may occur leading to a possible denial of service attack. OpenSSL itself uses the GENERAL_NAME_cmp function for two purposes: 1) Comparing CRL distribution point names between an available CRL and a CRL distribution point embedded in an X509 certificate 2) When verifying that a timestamp response token signer matches the timestamp authority name (exposed via the API functions TS_RESP_verify_response and TS_RESP_verify_token) If an attacker can control both items being compared then that attacker could trigger a crash. For example if the attacker can trick a client or server into checking a malicious certificate against a malicious CRL then this may occur. Note that some applications automatically download CRLs based on a URL embedded in a certificate. This checking happens prior to the signatures on the certificate and CRL being verified. OpenSSL\u0027s s_server, s_client and verify tools have support for the \"-crl_download\" option which implements automatic CRL downloading and this attack has been demonstrated to work against those tools. Note that an unrelated bug means that affected versions of OpenSSL cannot parse or construct correct encodings of EDIPARTYNAME. However it is possible to construct a malformed EDIPARTYNAME that OpenSSL\u0027s parser will accept and hence trigger this attack. All OpenSSL 1.1.1 and 1.0.2 versions are affected by this issue. Other OpenSSL releases are out of support and have not been checked. Fixed in OpenSSL 1.1.1i (Affected 1.1.1-1.1.1h). Fixed in OpenSSL 1.0.2x (Affected 1.0.2-1.0.2w). The OpenSSL Project has published OpenSSL Security Advisory [08 December 2020]. Severity: High. EDIPARTYNAME NULL pointer dereference - CVE-2020-1971. OpenSSL\u0027s GENERAL_NAME_cmp() function compares X.509 data such as the host name included in a certificate. 
If both arguments passed to the GENERAL_NAME_cmp() function are of type EDIPartyName, a NULL pointer dereference (CWE-476) may occur inside the function and crash the server or client application calling it. An attacker may cause a denial of service (DoS) by having a crafted X.509 certificate processed during certificate verification. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. A denial of service security issue exists in OpenSSL prior to 1.1.1i. This could\nprevent installations to flavors detected as `baremetal`, which might have\nthe required capacity to complete the installation. This is usually caused\nby OpenStack administrators not setting the appropriate metadata on their\nbare metal flavors. Validations are now skipped on flavors detected as\n`baremetal`, to prevent incorrect failures from being reported. \n(BZ#1889416)\n\n* Previously, there was a broken link on the OperatorHub install page of\nthe web console, which was intended to reference the cluster monitoring\ndocumentation. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1885442 - Console doesn\u0027t load in iOS Safari when using self-signed certificates\n1885946 - console-master-e2e-gcp-console test periodically fail due to no Alerts found\n1887551 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console\n1888165 - [release 4.6] IO doesn\u0027t recognize namespaces - 2 resources with the same name in 2 namespaces -\u003e only 1 gets collected\n1888650 - Fix CVE-2015-7501 affecting agent-maven-3.5\n1888717 - Cypress: Fix \u0027link-name\u0027 accesibility violation\n1888721 - ovn-masters stuck in crashloop after scale test\n1890993 - Selected Capacity is showing wrong size\n1890994 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off\n1891427 - CLI does not save login credentials as expected when using the same username in multiple clusters\n1891454 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName\n1891499 - Other machine config pools do not show during update\n1891891 - Wrong detail head on network policy detail page. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Server AUS (v. 7.4) - x86_64\nRed Hat Enterprise Linux Server E4S (v. 7.4) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional AUS (v. 7.4) - x86_64\nRed Hat Enterprise Linux Server Optional E4S (v. 7.4) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional TUS (v. 7.4) - x86_64\nRed Hat Enterprise Linux Server TUS (v. 7.4) - x86_64\n\n3. Description:\n\nOpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and\nTransport Layer Security (TLS) protocols, as well as a full-strength\ngeneral-purpose cryptography library. 
Security Fix(es):

* openssl: EDIPARTYNAME NULL pointer de-reference (CVE-2020-1971)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.

Package List:

Red Hat Enterprise Linux Server AUS (v. 7.4):

Source:
openssl-1.0.2k-9.el7_4.src.rpm

x86_64:
openssl-1.0.2k-9.el7_4.x86_64.rpm
openssl-debuginfo-1.0.2k-9.el7_4.i686.rpm
openssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm
openssl-devel-1.0.2k-9.el7_4.i686.rpm
openssl-devel-1.0.2k-9.el7_4.x86_64.rpm
openssl-libs-1.0.2k-9.el7_4.i686.rpm
openssl-libs-1.0.2k-9.el7_4.x86_64.rpm

Red Hat Enterprise Linux Server E4S (v. 7.4):

Source:
openssl-1.0.2k-9.el7_4.src.rpm

ppc64le:
openssl-1.0.2k-9.el7_4.ppc64le.rpm
openssl-debuginfo-1.0.2k-9.el7_4.ppc64le.rpm
openssl-devel-1.0.2k-9.el7_4.ppc64le.rpm
openssl-libs-1.0.2k-9.el7_4.ppc64le.rpm

x86_64:
openssl-1.0.2k-9.el7_4.x86_64.rpm
openssl-debuginfo-1.0.2k-9.el7_4.i686.rpm
openssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm
openssl-devel-1.0.2k-9.el7_4.i686.rpm
openssl-devel-1.0.2k-9.el7_4.x86_64.rpm
openssl-libs-1.0.2k-9.el7_4.i686.rpm
openssl-libs-1.0.2k-9.el7_4.x86_64.rpm

Red Hat Enterprise Linux Server TUS (v. 7.4):

Source:
openssl-1.0.2k-9.el7_4.src.rpm

x86_64:
openssl-1.0.2k-9.el7_4.x86_64.rpm
openssl-debuginfo-1.0.2k-9.el7_4.i686.rpm
openssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm
openssl-devel-1.0.2k-9.el7_4.i686.rpm
openssl-devel-1.0.2k-9.el7_4.x86_64.rpm
openssl-libs-1.0.2k-9.el7_4.i686.rpm
openssl-libs-1.0.2k-9.el7_4.x86_64.rpm

Red Hat Enterprise Linux Server Optional AUS (v. 
7.4):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-9.el7_4.i686.rpm\nopenssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm\nopenssl-perl-1.0.2k-9.el7_4.x86_64.rpm\nopenssl-static-1.0.2k-9.el7_4.i686.rpm\nopenssl-static-1.0.2k-9.el7_4.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional E4S (v. 7.4):\n\nppc64le:\nopenssl-debuginfo-1.0.2k-9.el7_4.ppc64le.rpm\nopenssl-perl-1.0.2k-9.el7_4.ppc64le.rpm\nopenssl-static-1.0.2k-9.el7_4.ppc64le.rpm\n\nx86_64:\nopenssl-debuginfo-1.0.2k-9.el7_4.i686.rpm\nopenssl-debuginfo-1.0.2k-9.el7_4.x86_64.rpm\nopenssl-perl-1.0.2k-9.el7_4.x86_64.rpm\nopenssl-static-1.0.2k-9.el7_4.i686.rpm\nopenssl-static-1.0.2k-9.el7_4.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional TUS (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update\nAdvisory ID: RHSA-2020:5633-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2020:5633\nIssue date: 2021-02-24\nCVE Names: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 \n CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 \n CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 \n CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 \n CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 \n CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 \n CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 \n CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 \n CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 \n CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 \n CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n 
CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 \n CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 \n CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 \n CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 \n CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 \n CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 \n CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 \n CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 \n CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 \n CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 \n CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 \n CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 \n CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 \n CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 \n CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 \n CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 \n CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 \n CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 \n CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 \n CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 \n CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 \n CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 \n CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 \n CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 \n CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 \n CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 \n CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 \n CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 \n CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 \n CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 \n CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 \n CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 \n CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 \n CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 \n CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 \n CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 \n CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 \n CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 \n CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 \n CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 \n CVE-2020-13249 CVE-2020-13630 
CVE-2020-13631 \n CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 \n CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 \n CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 \n CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 \n CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 \n CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 \n CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 \n CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 \n CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 \n CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 \n CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 \n CVE-2021-2007 CVE-2021-3121 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.7.0 is now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2020:5634\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-x86_64\n\nThe image digest is\nsha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-s390x\n\nThe image digest is\nsha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le\n\nThe image digest is\nsha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor. 
\n\nSecurity Fix(es):\n\n* crewjam/saml: authentication bypass in saml authentication\n(CVE-2020-27846)\n\n* golang: crypto/ssh: crafted authentication request can lead to nil\npointer dereference (CVE-2020-29652)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* kubernetes: Secret leaks in kube-controller-manager when using vSphere\nProvider (CVE-2020-8563)\n\n* containernetworking/plugins: IPv6 router advertisements allow for MitM\nattacks on IPv4 clusters (CVE-2020-10749)\n\n* heketi: gluster-block volume password details available in logs\n(CVE-2020-10763)\n\n* golang.org/x/text: possibility to trigger an infinite loop in\nencoding/unicode could lead to crash (CVE-2020-14040)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* golang-github-gorilla-websocket: integer overflow leads to denial of\nservice (CVE-2020-27813)\n\n* golang: math/big: panic during recursive division of very large numbers\n(CVE-2020-28362)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nFor OpenShift Container Platform 4.7, see the following documentation,\nwhich\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html. \n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1620608 - Restoring deployment config with history leads to weird state\n1752220 - [OVN] Network Policy fails to work when project label gets overwritten\n1756096 - Local storage operator should implement must-gather spec\n1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n1768255 - installer reports 100% complete but failing components\n1770017 - Init containers restart when the exited container is removed from node. \n1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating\n1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset\n1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale\n1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands\n1784298 - \"Displaying with reduced resolution due to large dataset.\" would show under some conditions\n1785399 - Under condition of heavy pod creation, creation fails with \u0027error reserving pod name ...: name is reserved\"\n1797766 - Resource Requirements\" specDescriptor fields - CPU and Memory injects empty string YAML editor\n1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
\n1805025 - [OSP] Machine status doesn\u0027t become \"Failed\" when creating a machine with invalid image\n1805639 - Machine status should be \"Failed\" when creating a machine with invalid machine configuration\n1806000 - CRI-O failing with: error reserving ctr name\n1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1810438 - Installation logs are not gathered from OCP nodes\n1812085 - kubernetes-networking-namespace-pods dashboard doesn\u0027t exist\n1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation\n1813012 - EtcdDiscoveryDomain no longer needed\n1813949 - openshift-install doesn\u0027t use env variables for OS_* for some of API endpoints\n1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use\n1819053 - loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: OpenAPI spec does not exist\n1819457 - Package Server is in \u0027Cannot update\u0027 status despite properly working\n1820141 - [RFE] deploy qemu-quest-agent on the nodes\n1822744 - OCS Installation CI test flaking\n1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario\n1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool\n1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file\n1829723 - User workload monitoring alerts fire out of the box\n1832968 - oc adm catalog mirror does not mirror the index image itself\n1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN\n1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters\n1834995 - 
olmFull suite always fails once th suite is run on the same cluster\n1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz\n1837953 - Replacing masters doesn\u0027t work for ovn-kubernetes 4.4\n1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks\n1838751 - [oVirt][Tracker] Re-enable skipped network tests\n1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups\n1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed\n1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP\n1841119 - Get rid of config patches and pass flags directly to kcm\n1841175 - When an Install Plan gets deleted, OLM does not create a new one\n1841381 - Issue with memoryMB validation\n1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option\n1844727 - Etcd container leaves grep and lsof zombie processes\n1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs\n1847074 - Filter bar layout issues at some screen widths on search page\n1848358 - CRDs with preserveUnknownFields:true don\u0027t reflect in status that they are non-structural\n1849543 - [4.5]kubeletconfig\u0027s description will show multiple lines for finalizers when upgrade from 4.4.8-\u003e4.5\n1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service\n1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard\n1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing\n1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD\n1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service\n1853115 - the restriction 
of --cloud option should be shown in help text. \n1853116 - `--to` option does not work with `--credentials-requests` flag. \n1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854567 - \"Installed Operators\" list showing \"duplicated\" entries during installation\n1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present\n1855351 - Inconsistent Installer reactions to Ctrl-C during user input process\n1855408 - OVN cluster unstable after running minimal scale test\n1856351 - Build page should show metrics for when the build ran, not the last 30 minutes\n1856354 - New APIServices missing from OpenAPI definitions\n1857446 - ARO/Azure: excessive pod memory allocation causes node lockup\n1857877 - Operator upgrades can delete existing CSV before completion\n1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed\n1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created\n1860136 - default ingress does not propagate annotations to route object on update\n1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as \"Failed\"\n1860518 - unable to stop a crio pod\n1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller\n1862430 - LSO: PV creation lock should not be acquired in a loop\n1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
\n1862608 - Virtual media does not work on hosts using BIOS, only UEFI\n1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network\n1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff\n1865839 - rpm-ostree fails with \"System transaction in progress\" when moving to kernel-rt\n1866043 - Configurable table column headers can be illegible\n1866087 - Examining agones helm chart resources results in \"Oh no!\"\n1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info\n1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement\n1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity\n1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there\u2019s no indication on which labels offer tooltip/help\n1866340 - [RHOCS Usability Study][Dashboard] It was not clear why \u201cNo persistent storage alerts\u201d was prominently displayed\n1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations\n1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le \u0026 s390x\n1866482 - Few errors are seen when oc adm must-gather is run\n1866605 - No metadata.generation set for build and buildconfig objects\n1866873 - MCDDrainError \"Drain failed on , updates may be blocked\" missing rendered node name\n1866901 - Deployment strategy for BMO allows multiple pods to run at the same time\n1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. 
\n1867165 - Cannot assign static address to baremetal install bootstrap vm\n1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig\n1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS\n1867477 - HPA monitoring cpu utilization fails for deployments which have init containers\n1867518 - [oc] oc should not print so many goroutines when ANY command fails\n1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster\n1867965 - OpenShift Console Deployment Edit overwrites deployment yaml\n1868004 - opm index add appears to produce image with wrong registry server binary\n1868065 - oc -o jsonpath prints possible warning / bug \"Unable to decode server response into a Table\"\n1868104 - Baremetal actuator should not delete Machine objects\n1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead\n1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters\n1868527 - OpenShift Storage using VMWare vSAN receives error \"Failed to add disk \u0027scsi0:2\u0027\" when mounted pod is created on separate node\n1868645 - After a disaster recovery pods a stuck in \"NodeAffinity\" state and not running\n1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation\n1868765 - [vsphere][ci] could not reserve an IP address: no available addresses\n1868770 - catalogSource named \"redhat-operators\" deleted in a disconnected cluster\n1868976 - Prometheus error opening query log file on EBS backed PVC\n1869293 - The configmap name looks confusing in aide-ds pod logs\n1869606 - crio\u0027s failing to delete a network namespace\n1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes\n1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run 
[Conformance]\n1870373 - Ingress Operator reports available when DNS fails to provision\n1870467 - D/DC Part of Helm / Operator Backed should not have HPA\n1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json\n1870800 - [4.6] Managed Column not appearing on Pods Details page\n1871170 - e2e tests are needed to validate the functionality of the etcdctl container\n1872001 - EtcdDiscoveryDomain no longer needed\n1872095 - content are expanded to the whole line when only one column in table on Resource Details page\n1872124 - Could not choose device type as \"disk\" or \"part\" when create localvolumeset from web console\n1872128 - Can\u0027t run container with hostPort on ipv6 cluster\n1872166 - \u0027Silences\u0027 link redirects to unexpected \u0027Alerts\u0027 view after creating a silence in the Developer perspective\n1872251 - [aws-ebs-csi-driver] Verify job in CI doesn\u0027t check for vendor dir sanity\n1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them\n1872821 - [DOC] Typo in Ansible Operator Tutorial\n1872907 - Fail to create CR from generated Helm Base Operator\n1872923 - Click \"Cancel\" button on the \"initialization-resource\" creation form page should send users to the \"Operator details\" page instead of \"Install Operator\" page (previous page)\n1873007 - [downstream] failed to read config when running the operator-sdk in the home path\n1873030 - Subscriptions without any candidate operators should cause resolution to fail\n1873043 - Bump to latest available 1.19.x k8s\n1873114 - Nodes goes into NotReady state (VMware)\n1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem\n1873305 - Failed to power on /inspect node when using Redfish protocol\n1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information\n1873480 - 
Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: \u201c?\u201d button/icon in Developer Console -\u003eNavigation\n1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working\n1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name \u003e 63 characters\n1874057 - Pod stuck in CreateContainerError - error msg=\"container_linux.go:348: starting container process caused \\\"chdir to cwd (\\\\\\\"/mount-point\\\\\\\") set in config.json failed: permission denied\\\"\"\n1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver\n1874192 - [RFE] \"Create Backing Store\" page doesn\u0027t allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider\n1874240 - [vsphere] unable to deprovision - Runtime error list attached objects\n1874248 - Include validation for vcenter host in the install-config\n1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6\n1874583 - apiserver tries and fails to log an event when shutting down\n1874584 - add retry for etcd errors in kube-apiserver\n1874638 - Missing logging for nbctl daemon\n1874736 - [downstream] no version info for the helm-operator\n1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution\n1874968 - Accessibility: The project selection drop down is a keyboard trap\n1875247 - Dependency resolution error \"found more than one head for channel\" is unhelpful for users\n1875516 - disabled scheduling is easy to miss in node page of OCP console\n1875598 - machine status is Running for a master node which has been terminated from the console\n1875806 - When creating a service of type \"LoadBalancer\" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes. 
\n1876166 - need to be able to disable kube-apiserver connectivity checks\n1876469 - Invalid doc link on yaml template schema description\n1876701 - podCount specDescriptor change doesn\u0027t take effect on operand details page\n1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt\n1876935 - AWS volume snapshot is not deleted after the cluster is destroyed\n1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted\n1877105 - add redfish to enabled_bios_interfaces\n1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`\n1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown\n1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only \u0027rootDevices\u0027\n1877681 - Manually created PV can not be used\n1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53\n1877740 - RHCOS unable to get ip address during first boot\n1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5\n1877919 - panic in multus-admission-controller\n1877924 - Cannot set BIOS config using Redfish with Dell iDracs\n1878022 - Met imagestreamimport error when import the whole image repository\n1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default \"Filesystem Name\" instead of providing a textbox, \u0026 the name should be validated\n1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status\n1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM\n1878766 - CPU consumption on nodes is higher than the CPU count of the node. \n1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus. 
\n1878823 - \"oc adm release mirror\" generating incomplete imageContentSources when using \"--to\" and \"--to-release-image\"\n1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode\n1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used\n1878953 - RBAC error shows when normal user access pvc upload page\n1878956 - `oc api-resources` does not include API version\n1878972 - oc adm release mirror removes the architecture information\n1879013 - [RFE]Improve CD-ROM interface selection\n1879056 - UI should allow to change or unset the evictionStrategy\n1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled\n1879094 - RHCOS dhcp kernel parameters not working as expected\n1879099 - Extra reboot during 4.5 -\u003e 4.6 upgrade\n1879244 - Error adding container to network \"ipvlan-host-local\": \"master\" field is required\n1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder\n1879282 - Update OLM references to point to the OLM\u0027s new doc site\n1879283 - panic after nil pointer dereference in pkg/daemon/update.go\n1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests\n1879419 - [RFE]Improve boot source description for \u0027Container\u0027 and \u2018URL\u2019\n1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted. 
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in `oc explain`
1880787 - No description for Provisioning CRD for `oc explain`
1880902 - need dnsPlocy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name "assisted-installer" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested mimimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations
1885706 - Cypress: Fix 'link-name' accesibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - `oc explain localvolumediscovery` returns empty description
1887751 - `oc explain localvolumediscoveryresult` returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accesibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots
1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability
1893546 - Deploy using virtual media fails on node cleaning step
1893601 - overview filesystem utilization of OCP is showing the wrong values
1893645 - oc describe route SIGSEGV
1893648 - Ironic image building process is not compatible with UEFI secure boot
1893724 - OperatorHub generates incorrect RBAC
1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted
1893776 - No useful metrics for image pull time available, making debugging issues there impossible
1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator
1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD
1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS
1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped
1893944 - Wrong product name for Multicloud Object Gateway
1893953 - (release-4.7) Gather default StatefulSet configs
1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating"
1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser
1893972 - Should skip e2e test cases as early as possible
1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://'
1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective
1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set
1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used.
1894065 - tag new packages to enable TLS support
1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0
1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries
1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM
1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted
1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)
1894216 - Improve OpenShift Web Console availability
1894275 - Fix CRO owners file to reflect node owner
1894278 - "database is locked" error when adding bundle to index image
1894330 - upgrade channels needs to be updated for 4.7
1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient"
1894374 - Dont prevent the user from uploading a file with incorrect extension
1894432 - [oVirt] sometimes installer timeout on tmp_import_vm
1894477 - bash syntax error in nodeip-configuration.service
1894503 - add automated test for Polarion CNV-5045
1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform
1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets
1894645 - Cinder volume provisioning crashes on nil cloud provider
1894677 - image-pruner job is panicking: klog stack
1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0
1894860 - 'backend' CI job passing despite failing tests
1894910 - Update the node to use the real-time kernel fails
1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package
1895065 - Schema / Samples / Snippets Tabs are all selected at the same time
1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI
1895141 - panic in service-ca injector
1895147 - Remove memory limits on openshift-dns
1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation
1895268 - The bundleAPIs should NOT be empty
1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster
1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release"
1895360 - Machine Config Daemon removes a file although its defined in the dropin
1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1
1895372 - Web console going blank after selecting any operator to install from OperatorHub
1895385 - Revert KUBELET_LOG_LEVEL back to level 3
1895423 - unable to edit an application with a custom builder image
1895430 - unable to edit custom template application
1895509 - Backup taken on one master cannot be restored on other masters
1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image
1895838 - oc explain description contains '/'
1895908 - "virtio" option is not available when modifying a CD-ROM to disk type
1895909 - e2e-metal-ipi-ovn-dualstack is failing
1895919 - NTO fails to load kernel modules
1895959 - configuring webhook token authentication should prevent cluster upgrades
1895979 - Unable to get coreos-installer with --copy-network to work
1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV
1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)
1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed
1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1897003 - VNC console cannot be connected after visit it in new window
1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals
1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO
1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored
1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces
1897138 - oVirt provider uses depricated cluster-api project
1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly
1897252 - Firing alerts are not showing up in console UI after cluster is up for some time
1897354 - Operator installation showing success, but Provided APIs are missing
1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused"
1897412 - [sriov]disableDrain did not be updated in CRD of manifest
1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page
1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost'
1897520 - After restarting nodes the image-registry co is in degraded true state.
1897584 - Add casc plugins
1897603 - Cinder volume attachment detection failure in Kubelet
1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized"
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests
1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition
1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`
1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing
1897897 - ptp lose sync openshift 4.6
1898036 - no network after reboot (IPI)
1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically
1898097 - mDNS floods the baremetal network
1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem
1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster
1898174 - [OVN] EgressIP does not guard against node IP assignment
1898194 - GCP: can't install on custom machine types
1898238 - Installer validations allow same floating IP for API and Ingress
1898268 - [OVN]: `make check` broken on 4.6
1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default
1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover
1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display.
\n1898407 - [Deployment timing regression] Deployment takes longer with 4.7\n1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service\n1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine\n1898500 - Failure to upgrade operator when a Service is included in a Bundle\n1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic\n1898532 - Display names defined in specDescriptors not respected\n1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted\n1898613 - Whereabouts should exclude IPv6 ranges\n1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase\n1898679 - Operand creation form - Required \"type: object\" properties (Accordion component) are missing red asterisk\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator\n1898839 - Wrong YAML in operator metadata\n1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job\n1898873 - Remove TechPreview Badge from Monitoring\n1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way\n1899111 - [RFE] Update jenkins-maven-agen to maven36\n1899128 - VMI details screen -\u003e show the warning that it is preferable to have a VM only if the VM actually does not exist\n1899175 - bump the RHCOS boot images for 4.7\n1899198 - Use new packages for ipa ramdisks\n1899200 - In Installed Operators page I cannot search for an Operator by it\u0027s name\n1899220 - Support AWS IMDSv2\n1899350 - configure-ovs.sh doesn\u0027t configure bonding options\n1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error \"An error occurred Not Found\"\n1899459 - 
Failed to start monitoring pods once the operator removed from override list of CVO\n1899515 - Passthrough credentials are not immediately re-distributed on update\n1899575 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899582 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899588 - Operator objects are re-created after all other associated resources have been deleted\n1899600 - Increased etcd fsync latency as of OCP 4.6\n1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup\n1899627 - Project dashboard Active status using small icon\n1899725 - Pods table does not wrap well with quick start sidebar open\n1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)\n1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality\n1899835 - catalog-operator repeatedly crashes with \"runtime error: index out of range [0] with length 0\"\n1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap\n1899853 - additionalSecurityGroupIDs not working for master nodes\n1899922 - NP changes sometimes influence new pods. 
\n1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet\n1900008 - Fix internationalized sentence fragments in ImageSearch.tsx\n1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx\n1900020 - Remove \u0026apos; from internationalized keys\n1900022 - Search Page - Top labels field is not applied to selected Pipeline resources\n1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently\n1900126 - Creating a VM results in suggestion to create a default storage class when one already exists\n1900138 - [OCP on RHV] Remove insecure mode from the installer\n1900196 - stalld is not restarted after crash\n1900239 - Skip \"subPath should be able to unmount\" NFS test\n1900322 - metal3 pod\u0027s toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists\n1900377 - [e2e][automation] create new css selector for active users\n1900496 - (release-4.7) Collect spec config for clusteroperator resources\n1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks\n1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue\n1900759 - include qemu-guest-agent by default\n1900790 - Track all resource counts via telemetry\n1900835 - Multus errors when cachefile is not found\n1900935 - `oc adm release mirror` panic panic: runtime error\n1900989 - accessing the route cannot wake up the idled resources\n1901040 - When scaling down the status of the node is stuck on deleting\n1901057 - authentication operator health check failed when installing a cluster behind proxy\n1901107 - pod donut shows incorrect information\n1901111 - Installer dependencies are broken\n1901200 - linuxptp-daemon crash when enable debug log level\n1901301 - CBO should handle platform=BM without provisioning CR\n1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly\n1901363 - High Podready 
Latency due to timed out waiting for annotations\n1901373 - redundant bracket on snapshot restore button\n1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with \"timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true\"\n1901395 - \"Edit virtual machine template\" action link should be removed\n1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting\n1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP\n1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema\n1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod \"before all\" hook for \"creates the resource instance\"\n1901604 - CNO blocks editing Kuryr options\n1901675 - [sig-network] multicast when using one of the plugins \u0027redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy\u0027 should allow multicast traffic in namespaces where it is enabled\n1901909 - The device plugin pods / cni pod are restarted every 5 minutes\n1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service\n1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error\n1902059 - Wire a real signer for service accout issuer\n1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902157 - The DaemonSet machine-api-termination-handler couldn\u0027t allocate Pod\n1902253 - MHC status doesnt set RemediationsAllowed = 0\n1902299 - Failed to mirror operator catalog - error: destination registry required\n1902545 - Cinder csi 
driver node pod should add nodeSelector for Linux\n1902546 - Cinder csi driver node pod doesn\u0027t run on master node\n1902547 - Cinder csi driver controller pod doesn\u0027t run on master node\n1902552 - Cinder csi driver does not use the downstream images\n1902595 - Project workloads list view doesn\u0027t show alert icon and hover message\n1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent\n1902601 - Cinder csi driver pods run as BestEffort qosClass\n1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group\n1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails\n1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked\n1902824 - failed to generate semver informed package manifest: unable to determine default channel\n1902894 - hybrid-overlay-node crashing trying to get node object during initialization\n1902969 - Cannot load vmi detail page\n1902981 - It should default to current namespace when create vm from template\n1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI\n1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry\n1903034 - OLM continuously printing debug logs\n1903062 - [Cinder csi driver] Deployment mounted volume have no write access\n1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready\n1903107 - Enable vsphere-problem-detector e2e tests\n1903164 - OpenShift YAML editor jumps to top every few seconds\n1903165 - Improve Canary Status Condition handling for e2e tests\n1903172 - Column Management: Fix sticky footer on scroll\n1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled\n1903188 - [Descheduler] cluster log reports failed to 
validate server configuration\" err=\"unsupported log format:\n1903192 - Role name missing on create role binding form\n1903196 - Popover positioning is misaligned for Overview Dashboard status items\n1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. \n1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components\n1903248 - Backport Upstream Static Pod UID patch\n1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]\n1903290 - Kubelet repeatedly log the same log line from exited containers\n1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. \n1903382 - Panic when task-graph is canceled with a TaskNode with no tasks\n1903400 - Migrate a VM which is not running goes to pending state\n1903402 - Nic/Disk on VMI overview should link to VMI\u0027s nic/disk page\n1903414 - NodePort is not working when configuring an egress IP address\n1903424 - mapi_machine_phase_transition_seconds_sum doesn\u0027t work\n1903464 - \"Evaluating rule failed\" for \"record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\" and \"record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"\n1903639 - Hostsubnet gatherer produces wrong output\n1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service\n1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started\n1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image\n1903717 - Handle different Pod selectors for metal3 Deployment\n1903733 - Scale up followed by scale down can delete all running workers\n1903917 - Failed to load \"Developer Catalog\" page\n1903999 - Httplog response code is always zero\n1904026 - The quota controllers should resync on new 
resources and make progress\n1904064 - Automated cleaning is disabled by default\n1904124 - DHCP to static lease script doesn\u0027t work correctly if starting with infinite leases\n1904125 - Boostrap VM .ign image gets added into \u0027default\u0027 pool instead of \u003ccluster-name\u003e-\u003cid\u003e-bootstrap\n1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails\n1904133 - KubeletConfig flooded with failure conditions\n1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart\n1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !\n1904244 - MissingKey errors for two plugins using i18next.t\n1904262 - clusterresourceoverride-operator has version: 1.0.0 every build\n1904296 - VPA-operator has version: 1.0.0 every build\n1904297 - The index image generated by \"opm index prune\" leaves unrelated images\n1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards\n1904385 - [oVirt] registry cannot mount volume on 4.6.4 -\u003e 4.6.6 upgrade\n1904497 - vsphere-problem-detector: Run on vSphere cloud only\n1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set\n1904502 - vsphere-problem-detector: allow longer timeouts for some operations\n1904503 - vsphere-problem-detector: emit alerts\n1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)\n1904578 - metric scraping for vsphere problem detector is not configured\n1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -\u003e 4.6.6 upgrade\n1904663 - IPI pointer customization MachineConfig always generated\n1904679 - [Feature:ImageInfo] Image info should display information about images\n1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image\n1904684 - [sig-cli] oc debug 
ensure it works with image streams\n1904713 - Helm charts with kubeVersion restriction are filtered incorrectly\n1904776 - Snapshot modal alert is not pluralized\n1904824 - Set vSphere hostname from guestinfo before NM starts\n1904941 - Insights status is always showing a loading icon\n1904973 - KeyError: \u0027nodeName\u0027 on NP deletion\n1904985 - Prometheus and thanos sidecar targets are down\n1904993 - Many ampersand special characters are found in strings\n1905066 - QE - Monitoring test cases - smoke test suite automation\n1905074 - QE -Gherkin linter to maintain standards\n1905100 - Too many haproxy processes in default-router pod causing high load average\n1905104 - Snapshot modal disk items missing keys\n1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm\n1905119 - Race in AWS EBS determining whether custom CA bundle is used\n1905128 - [e2e][automation] e2e tests succeed without actually execute\n1905133 - operator conditions special-resource-operator\n1905141 - vsphere-problem-detector: report metrics through telemetry\n1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures\n1905194 - Detecting broken connections to the Kube API takes up to 15 minutes\n1905221 - CVO transitions from \"Initializing\" to \"Updating\" despite not attempting many manifests\n1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP\n1905253 - Inaccurate text at bottom of Events page\n1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory\n1905299 - OLM fails to update operator\n1905307 - Provisioning CR is missing from must-gather\n1905319 - cluster-samples-operator containers are not requesting required memory resource\n1905320 - csi-snapshot-webhook is not requesting required memory resource\n1905323 - dns-operator is not requesting required memory resource\n1905324 - 
ingress-operator is not requesting required memory resource\n1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory\n1905328 - Changing the bound token service account issuer invalids previously issued bound tokens\n1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory\n1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory\n1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails\n1905347 - QE - Design Gherkin Scenarios\n1905348 - QE - Design Gherkin Scenarios\n1905362 - [sriov] Error message \u0027Fail to update DaemonSet\u0027 always shown in sriov operator pod\n1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted\n1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input\n1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation\n1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1\n1905404 - The example of \"Remove the entrypoint on the mysql:latest image\" for `oc image append` does not work\n1905416 - Hyperlink not working from Operator Description\n1905430 - usbguard extension fails to install because of missing correct protobuf dependency version\n1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads\n1905502 - Test flake - unable to get https transport for ephemeral-registry\n1905542 - [GSS] The \"External\" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6. 
\n1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs\n1905610 - Fix typo in export script\n1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster\n1905640 - Subscription manual approval test is flaky\n1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry\n1905696 - ClusterMoreUpdatesModal component did not get internationalized\n1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes\n1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project\n1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster\n1905792 - [OVN]Cannot create egressfirewalll with dnsName\n1905889 - Should create SA for each namespace that the operator scoped\n1905920 - Quickstart exit and restart\n1905941 - Page goes to error after create catalogsource\n1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711\n1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters\n1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected\n1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it\n1906118 - OCS feature detection constantly polls storageclusters and storageclasses\n1906120 - \u0027Create Role Binding\u0027 form not setting user or group value when created from a user or group resource\n1906121 - [oc] After new-project creation, the kubeconfig file does not set the project\n1906134 - OLM should not create OperatorConditions for copied CSVs\n1906143 - CBO supports log levels\n1906186 - i18n: Translators are not able to translate `this` without context for alert manager config\n1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots\n1906274 - StorageClass installed by Cinder csi driver operator 
should enable the allowVolumeExpansion to support volume resize. \n1906276 - `oc image append` can\u0027t work with multi-arch image with --filter-by-os=\u0027.*\u0027\n1906318 - use proper term for Authorized SSH Keys\n1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional\n1906356 - Unify Clone PVC boot source flow with URL/Container boot source\n1906397 - IPA has incorrect kernel command line arguments\n1906441 - HorizontalNav and NavBar have invalid keys\n1906448 - Deploy using virtualmedia with provisioning network disabled fails - \u0027Failed to connect to the agent\u0027 in ironic-conductor log\n1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project\n1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node\u0027s memory and killing them\n1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures\n1906511 - Root reprovisioning tests flaking often in CI\n1906517 - Validation is not robust enough and may prevent to generate install-confing. 
\n1906518 - Update snapshot API CRDs to v1\n1906519 - Update LSO CRDs to use v1\n1906570 - Number of disruptions caused by reboots on a cluster cannot be measured\n1906588 - [ci][sig-builds] nodes is forbidden: User \"e2e-test-jenkins-pipeline-xfghs-user\" cannot list resource \"nodes\" in API group \"\" at the cluster scope\n1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs\n1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs\n1906679 - quick start panel styles are not loaded\n1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber\n1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form\n1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created\n1906689 - user can pin to nav configmaps and secrets multiple times\n1906691 - Add doc which describes disabling helm chart repository\n1906713 - Quick starts not accesible for a developer user\n1906718 - helm chart \"provided by Redhat\" is misspelled\n1906732 - Machine API proxy support should be tested\n1906745 - Update Helm endpoints to use Helm 3.4.x\n1906760 - performance issues with topology constantly re-rendering\n1906766 - localized `Autoscaled` \u0026 `Autoscaling` pod texts overlap with the pod ring\n1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section\n1906769 - topology fails to load with non-kubeadmin user\n1906770 - shortcuts on mobiles view occupies a lot of space\n1906798 - Dev catalog customization doesn\u0027t update console-config ConfigMap\n1906806 - Allow installing extra packages in ironic container images\n1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer\n1906835 - Topology view shows add page before then showing full project workloads\n1906840 - ClusterOperator should not have status \"Updating\" if 
operator version is the same as the release version\n1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy\n1906860 - Bump kube dependencies to v1.20 for Net Edge components\n1906864 - Quick Starts Tour: Need to adjust vertical spacing\n1906866 - Translations of Sample-Utils\n1906871 - White screen when sort by name in monitoring alerts page\n1906872 - Pipeline Tech Preview Badge Alignment\n1906875 - Provide an option to force backup even when API is not available. \n1906877 - Placeholder\u0027 value in search filter do not match column heading in Vulnerabilities\n1906879 - Add missing i18n keys\n1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install\n1906896 - No Alerts causes odd empty Table (Need no content message)\n1906898 - Missing User RoleBindings in the Project Access Web UI\n1906899 - Quick Start - Highlight Bounding Box Issue\n1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1\n1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers\n1906935 - Delete resources when Provisioning CR is deleted\n1906968 - Must-gather should support collecting kubernetes-nmstate resources\n1906986 - Ensure failed pod adds are retried even if the pod object doesn\u0027t change\n1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt\n1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change\n1907211 - beta promotion of p\u0026f switched storage version to v1beta1, making downgrades impossible. \n1907269 - Tooltips data are different when checking stack or not checking stack for the same time\n1907280 - Install tour of OCS not available. 
\n1907282 - Topology page breaks with white screen\n1907286 - The default mhc machine-api-termination-handler couldn\u0027t watch spot instance\n1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent\n1907293 - Increase timeouts in e2e tests\n1907295 - Gherkin script for improve management for helm\n1907299 - Advanced Subscription Badge for KMS and Arbiter not present\n1907303 - Align VM template list items by baseline\n1907304 - Use PF styles for selected template card in VM Wizard\n1907305 - Drop \u0027ISO\u0027 from CDROM boot source message\n1907307 - Support and provider labels should be passed on between templates and sources\n1907310 - Pin action should be renamed to favorite\n1907312 - VM Template source popover is missing info about added date\n1907313 - ClusterOperator objects cannot be overriden with cvo-overrides\n1907328 - iproute-tc package is missing in ovn-kube image\n1907329 - CLUSTER_PROFILE env. variable is not used by the CVO\n1907333 - Node stuck in degraded state, mcp reports \"Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached\"\n1907373 - Rebase to kube 1.20.0\n1907375 - Bump to latest available 1.20.x k8s - workloads team\n1907378 - Gather netnamespaces networking info\n1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity\n1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn\u0027t match the CSV one\n1907390 - prometheus-adapter: panic after k8s 1.20 bump\n1907399 - build log icon link on topology nodes cause app to reload\n1907407 - Buildah version not accessible\n1907421 - [4.6.1]oc-image-mirror command failed on \"error: unable to copy layer\"\n1907453 - Dev Perspective -\u003e running vm details -\u003e resources -\u003e no data\n1907454 - Install PodConnectivityCheck CRD with CNO\n1907459 - \"The Boot source is also maintained by Red Hat.\" is always 
shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing `ignore-volume-az` is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show `kubectl diff ` command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory Path to Devfile
1910805 - Missing translation for Pipeline status and breadcrumb text
1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer
1910840 - Show Nonexistent command info in the `oc rollback -h` help page
1910859 - breadcrumbs doesn't use last namespace
1910866 - Unify templates string
1910870 - Unify template dropdown action
1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6
1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads"
1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard
1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration"
1911213 - Wrong and misleading warning for VMs that were created manually (not from template)
1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created
1911269 - waiting for the build message present when build exists
1911280 - Builder images are not detected for Dotnet, Httpd, NGINX
1911307 - Pod Scale-up requires extra privileges in OpenShift web-console
1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template
1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error
1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template
1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation
1911418 - [v2v] The target storage class name is not displayed if default storage class is used
1911434 - git ops empty state page displays icon with watermark
1911443 - SSH Cretifiaction field should be validated
1911465 - IOPS display wrong unit
1911474 - Devfile Application Group Does Not Delete Cleanly (errors)
1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController
1911574 - Expose volume mode on Upload Data form
1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined
1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel
1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle''
1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state
1911782 - Descheduler should not evict pod used local storage by the PVC
1911796 - uploading flow being displayed before submitting the form
1912066 - The ansible type operator's manager container is not stable when managing the CR
1912077 - helm operator's default rbac forbidden
1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory'
1912237 - Rebase CSI sidecars for 4.7
1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page
1912409 - Fix flow schema deployment
1912434 - Update guided tour modal title
1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken
1912523 - Standalone pod status not updating in topology graph
1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion
1912558 - TaskRun list and detail screen doesn't show Pending status
1912563 - p&f: carry 97206: clean up executing request on panic
1912565 - OLM macOS local build broken by moby/term dependency
1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion
1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff
1912590 - publicImageRepository not being populated
1912640 - Go operator's controller pods is forbidden
1912701 - Handle dual-stack configuration for NIC IP
1912703 - multiple queries can't be plotted in the same graph under some conditons
1912730 - Operator backed: In-context should support visual connector if SBO is not installed
1912828 - Align High Performance VMs with High Performance in RHV-UI
1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates
1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URL's
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early before it 's succeeded eventually
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - update info alert this template is not aditable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in `from dockerfile` flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk souce
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - Its not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling
1914793 - device names should not be translated
1914894 - Warn about using non-groupified api version
1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug
1914932 - Put correct resource name in relatedObjects
1914938 - PVC disk is not shown on customization wizard general tab
1914941 - VM Template rootdisk is not deleted after fetching default disk bus
1914975 - Collect logs from openshift-sdn namespace
1915003 - No estimate of average node readiness during lifetime of a cluster
1915027 - fix MCS blocking iptables rules
1915041 - s3:ListMultipartUploadParts is relied on implicitly
1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons
1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours
1915085 - Pods created and rapidly terminated get stuck
1915114 - [aws-c2s] worker machines are not create during install
1915133 - Missing default pinned nav items in dev perspective
1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource
1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot
1915188 - Remove HostSubnet anonymization
1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment
1915217 - OKD payloads expect to be signed with production keys
1915220 - Remove dropdown workaround for user settings
1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure
1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod
1915277 - [e2e][automation]fix cdi upload form test
1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout
1915304 - Updating scheduling component builder & base images to be consistent with ART
1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node
1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection
1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod
1915357 - Dev Catalog doesn't load anything if virtualization operator is installed
1915379 - New template wizard should require provider and make support input a dropdown type
1915408 - Failure in operator-registry kind e2e test
1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation
1915460 - Cluster name size might affect installations
1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance
1915540 - Silent 4.7 RHCOS install failure on ppc64le
1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
1915582 - p&f: carry upstream pr 97860
1915594 - [e2e][automation] Improve test for disk validation
1915617 - Bump bootimage for various fixes
1915624 - "Please fill in the following field: Template provider" blocks customize wizard
1915627 - Translate Guided Tour text.
1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error
1915647 - Intermittent White screen when the connector dragged to revision
1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased
1915654 - [e2e][automation] Add a verification for Afinity modal should hint "Matching node found"
1915661 - Can't run the 'oc adm prune' command in a pod
1915672 - Kuryr doesn't work with selfLink disabled.
1915674 - Golden image PVC creation - storage size should be taken from the template
1915685 - Message for not supported template is not clear enough
1915760 - Need to increase timeout to wait rhel worker get ready
1915793 - quick starts panel syncs incorrectly across browser windows
1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster
1915818 - vsphere-problem-detector: use "_totals" in metrics
1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol
1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version
1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0
1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics
1915885 - Kuryr doesn't support workers running on multiple subnets
1915898 - TaskRun log output shows "undefined" in streaming
1915907 - test/cmd/builds.sh uses docker.io
1915912 - sig-storage-csi-snapshotter image not available
1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard
1915939 - Resizing the browser window removes Web Terminal Icon
1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7
1915962 - ROKS: manifest with machine health check fails to apply in 4.7
1915972 - Global configuration breadcrumbs do not work as expected
1915981 - Install ethtool and conntrack in container for debugging
1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception
1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups
1916021 - OLM enters infinite loop if Pending CSV replaces itself
1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry
1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations
1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk
1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration
1916145 - Explicitly set minimum versions of python libraries
1916164 - Update csi-driver-nfs builder & base images to be consistent with ART
1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7
1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third
1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2
1916379 - error metrics from vsphere-problem-detector should be gauge
1916382 - Can't create ext4 filesystems with Ignition
1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates
1916401 - Deleting an ingress controller with a bad DNS Record hangs
1916417 - [Kuryr] Must-gather does not have all Custom Resources information
1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image
1916454 - teach CCO about upgradeability from 4.6 to 4.7
1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation
1916502 - Boot disk mirroring fails with mdadm error
1916524 - Two rootdisk shows on storage step
1916580 - Default yaml is broken for VM and VM template
1916621 - oc adm node-logs examples are wrong
1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret.
1916692 - Possibly fails to destroy LB and thus cluster
1916711 - Update Kube dependencies in MCO to 1.20.0
1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6
1916764 - editing a workload with no application applied, will auto fill the app
1916834 - Pipeline Metrics - Text Updates
1916843 - collect logs from openshift-sdn-controller pod
1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed
1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually
1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited
1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together"
1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace
1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document
1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error
1917117 - Common templates - disks screen: invalid disk name
1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator
1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk
1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened
1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console
1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory
1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7
1917327 - annotations.message maybe wrong for NTOPodsNotReady alert
1917367 - Refactor periodic.go
1917371 - Add docs on how to use the built-in profiler
1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console
1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui
1917484 - [BM][IPI] Failed to scale down machineset
1917522 - Deprecate --filter-by-os in oc adm catalog mirror
1917537 - controllers continuously busy reconciling operator
1917551 - use min_over_time for vsphere prometheus alerts
1917585 - OLM Operator install page missing i18n
1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types
1917605 - Deleting an exgw causes pods to no longer route to other exgws
1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API
1917656 - Add to Project/application for eventSources from topology shows 404
1917658 - Show TP badge for sources powered by camel connectors in create flow
1917660 - Editing parallelism of job get error info
1917678 - Could not provision pv when no symlink and target found on rhel worker
1917679 - Hide double CTA in admin pipelineruns tab
1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster.
\n1917759 - Console operator panics after setting plugin that does not exists to the console-operator config\n1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0\n1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0\n1917799 - Gather s list of names and versions of installed OLM operators\n1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error\n1917814 - Show Broker create option in eventing under admin perspective\n1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1917872 - [oVirt] rebase on latest SDK 2021-01-12\n1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image\n1917938 - upgrade version of dnsmasq package\n1917942 - Canary controller causes panic in ingress-operator\n1918019 - Undesired scrollbars in markdown area of QuickStart\n1918068 - Flaky olm integration tests\n1918085 - reversed name of job and namespace in cvo log\n1918112 - Flavor is not editable if a customize VM is created from cli\n1918129 - Update IO sample archive with missing resources \u0026 remove IP anonymization from clusteroperator resources\n1918132 - i18n: Volume Snapshot Contents menu is not translated\n1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2\n1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn\u0027t be installed on OSP\n1918153 - When `\u0026` character is set as an environment variable in a build config it is getting converted as `\\u0026`\n1918185 - Capitalization on PLR details page\n1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections\n1918318 - Kamelet connector\u0027s are not shown in eventing section under Admin perspective\n1918351 - Gather SAP configuration (SCC \u0026 ClusterRoleBinding)\n1918375 - [calico] rbac-proxy container in kube-proxy fails to create 
tokenreviews\n1918395 - [ovirt] increase livenessProbe period\n1918415 - MCD nil pointer on dropins\n1918438 - [ja_JP, zh_CN] Serverless i18n misses\n1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig\n1918471 - CustomNoUpgrade Feature gates are not working correctly\n1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk\n1918622 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1918623 - Updating ose-jenkins-agent-nodejs-12 builder \u0026 base images to be consistent with ART\n1918625 - Updating ose-jenkins-agent-nodejs-10 builder \u0026 base images to be consistent with ART\n1918635 - Updating openshift-jenkins-2 builder \u0026 base images to be consistent with ART #1197\n1918639 - Event listener with triggerRef crashes the console\n1918648 - Subscription page doesn\u0027t show InstallPlan correctly\n1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack\n1918748 - helmchartrepo is not http(s)_proxy-aware\n1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1918803 - Need dedicated details page w/ global config breadcrumbs for \u0027KnativeServing\u0027 plugin\n1918826 - Insights popover icons are not horizontally aligned\n1918879 - need better debug for bad pull secrets\n1918958 - The default NMstate instance from the operator is incorrect\n1919097 - Close bracket \")\" missing at the end of the sentence in the UI\n1919231 - quick search modal cut off on smaller screens\n1919259 - Make \"Add x\" singular in Pipeline Builder\n1919260 - VM Template list actions should not wrap\n1919271 - NM prepender script doesn\u0027t support systemd-resolved\n1919341 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry\n1919379 - dotnet logo out of date\n1919387 - Console 
login fails with no error when it can\u0027t write to localStorage\n1919396 - A11y Violation: svg-img-alt on Pod Status ring\n1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren\u0027t verified\n1919750 - Search InstallPlans got Minified React error\n1919778 - Upgrade is stuck in insights operator Degraded with \"Source clusterconfig could not be retrieved\" until insights operator pod is manually deleted\n1919823 - OCP 4.7 Internationalization Chinese tranlate issue\n1919851 - Visualization does not render when Pipeline \u0026 Task share same name\n1919862 - The tip information for `oc new-project --skip-config-write` is wrong\n1919876 - VM created via customize wizard cannot inherit template\u0027s PVC attributes\n1919877 - Click on KSVC breaks with white screen\n1919879 - The toolbox container name is changed from \u0027toolbox-root\u0027 to \u0027toolbox-\u0027 in a chroot environment\n1919945 - user entered name value overridden by default value when selecting a git repository\n1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference\n1919970 - NTO does not update when the tuned profile is updated. 
\n1919999 - Bump Cluster Resource Operator Golang Versions\n1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration\n1920200 - user-settings network error results in infinite loop of requests\n1920205 - operator-registry e2e tests not working properly\n1920214 - Bump golang to 1.15 in cluster-resource-override-admission\n1920248 - re-running the pipelinerun with pipelinespec crashes the UI\n1920320 - VM template field is \"Not available\" if it\u0027s created from common template\n1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`\n1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs\n1920390 - Monitoring \u003e Metrics graph shifts to the left when clicking the \"Stacked\" option and when toggling data series lines on / off\n1920426 - Egress Router CNI OWNERS file should have ovn-k team members\n1920427 - Need to update `oc login` help page since we don\u0027t support prompt interactively for the username\n1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time\n1920438 - openshift-tuned panics on turning debugging on/off. 
\n1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn\n1920481 - kuryr-cni pods using unreasonable amount of CPU\n1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof\n1920524 - Topology graph crashes adding Open Data Hub operator\n1920526 - catalog operator causing CPU spikes and bad etcd performance\n1920551 - Boot Order is not editable for Templates in \"openshift\" namespace\n1920555 - bump cluster-resource-override-admission api dependencies\n1920571 - fcp multipath will not recover failed paths automatically\n1920619 - Remove default scheduler profile value\n1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present\n1920674 - MissingKey errors in bindings namespace\n1920684 - Text in language preferences modal is misleading\n1920695 - CI is broken because of bad image registry reference in the Makefile\n1920756 - update generic-admission-server library to get the system:masters authorization optimization\n1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for \"network-check-target\" failed when \"defaultNodeSelector\" is set\n1920771 - i18n: Delete persistent volume claim drop down is not translated\n1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI\n1920912 - Unable to power off BMH from console\n1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by \"2\"\n1920984 - [e2e][automation] some menu items names are out dated\n1921013 - Gather PersistentVolume definition (if any) used in image registry config\n1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)\n1921087 - \u0027start next quick start\u0027 link doesn\u0027t work and is unintuitive\n1921088 - test-cmd is failing on volumes.sh pretty consistently\n1921248 - Clarify the kubelet configuration cr description\n1921253 - Text filter default placeholder text not 
internationalized\n1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window\n1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo\n1921277 - Fix Warning and Info log statements to handle arguments\n1921281 - oc get -o yaml --export returns \"error: unknown flag: --export\"\n1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn\u0027t exist\n1921556 - [OCS with Vault]: OCS pods didn\u0027t comeup after deploying with Vault details from UI\n1921572 - For external source (i.e GitHub Source) form view as well shows yaml\n1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass\n1921610 - Pipeline metrics font size inconsistency\n1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1921655 - [OSP] Incorrect error handling during cloudinfo generation\n1921713 - [e2e][automation] fix failing VM migration tests\n1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view\n1921774 - delete application modal errors when a resource cannot be found\n1921806 - Explore page APIResourceLinks aren\u0027t i18ned\n1921823 - CheckBoxControls not internationalized\n1921836 - AccessTableRows don\u0027t internationalize \"User\" or \"Group\"\n1921857 - Test flake when hitting router in e2e tests due to one router not being up to date\n1921880 - Dynamic plugins are not initialized on console load in production mode\n1921911 - Installer PR #4589 is causing leak of IAM role policy bindings\n1921921 - \"Global Configuration\" breadcrumb does not use sentence case\n1921949 - Console bug - source code URL broken for gitlab self-hosted repositories\n1921954 - Subscription-related constraints in ResolutionFailed events are misleading\n1922015 - buttons in modal header 
are invisible on Safari\n1922021 - Nodes terminal page \u0027Expand\u0027 \u0027Collapse\u0027 button not translated\n1922050 - [e2e][automation] Improve vm clone tests\n1922066 - Cannot create VM from custom template which has extra disk\n1922098 - Namespace selection dialog is not closed after select a namespace\n1922099 - Updated Readme documentation for QE code review and setup\n1922146 - Egress Router CNI doesn\u0027t have logging support. \n1922267 - Collect specific ADFS error\n1922292 - Bump RHCOS boot images for 4.7\n1922454 - CRI-O doesn\u0027t enable pprof by default\n1922473 - reconcile LSO images for 4.8\n1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace\n1922782 - Source registry missing docker:// in yaml\n1922907 - Interop UI Tests - step implementation for updating feature files\n1922911 - Page crash when click the \"Stacked\" checkbox after clicking the data series toggle buttons\n1922991 - \"verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build\" test fails on OKD\n1923003 - WebConsole Insights widget showing \"Issues pending\" when the cluster doesn\u0027t report anything\n1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources\n1923102 - [vsphere-problem-detector-operator] pod\u0027s version is not correct\n1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot\n1923674 - k8s 1.20 vendor dependencies\n1923721 - PipelineRun running status icon is not rotating\n1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios\n1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator\n1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator\n1923874 - Unable to specify values with % in kubeletconfig\n1923888 - Fixes error metadata gathering\n1923892 - Update arch.md after 
refactor. \n1923894 - \"installed\" operator status in operatorhub page does not reflect the real status of operator\n1923895 - Changelog generation. \n1923911 - [e2e][automation] Improve tests for vm details page and list filter\n1923945 - PVC Name and Namespace resets when user changes os/flavor/workload\n1923951 - EventSources shows `undefined` in project\n1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins\n1924046 - Localhost: Refreshing on a Project removes it from nav item urls\n1924078 - Topology quick search View all results footer should be sticky. \n1924081 - NTO should ship the latest Tuned daemon release 2.15\n1924084 - backend tests incorrectly hard-code artifacts dir\n1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build\n1924135 - Under sufficient load, CRI-O may segfault\n1924143 - Code Editor Decorator url is broken for Bitbucket repos\n1924188 - Language selector dropdown doesn\u0027t always pre-select the language\n1924365 - Add extra disk for VM which use boot source PXE\n1924383 - Degraded network operator during upgrade to 4.7.z\n1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
\n1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can\u0027t set finalizers on\n1924583 - Deprectaed templates are listed in the Templates screen\n1924870 - pick upstream pr#96901: plumb context with request deadline\n1924955 - Images from Private external registry not working in deploy Image\n1924961 - k8sutil.TrimDNS1123Label creates invalid values\n1924985 - Build egress-router-cni for both RHEL 7 and 8\n1925020 - Console demo plugin deployment image shoult not point to dockerhub\n1925024 - Remove extra validations on kafka source form view net section\n1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running\n1925072 - NTO needs to ship the current latest stalld v1.7.0\n1925163 - Missing info about dev catalog in boot source template column\n1925200 - Monitoring Alert icon is missing on the workload in Topology view\n1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1\n1925319 - bash syntax error in configure-ovs.sh script\n1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data\n1925516 - Pipeline Metrics Tooltips are overlapping data\n1925562 - Add new ArgoCD link from GitOps application environments page\n1925596 - Gitops details page image and commit id text overflows past card boundary\n1926556 - \u0027excessive etcd leader changes\u0027 test case failing in serial job because prometheus data is wiped by machine set test\n1926588 - The tarball of operator-sdk is not ready for ocp4.7\n1927456 - 4.7 still points to 4.6 catalog images\n1927500 - API server exits non-zero on 2 SIGTERM signals\n1929278 - Monitoring workloads using too high a priorityclass\n1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n1929920 - Cluster monitoring documentation link is broken - 404 not found\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-10103\nhttps://access.redhat.com/security/cve/CVE-2018-10105\nhttps://access.redhat.com/security/cve/CVE-2018-14461\nhttps://access.redhat.com/security/cve/CVE-2018-14462\nhttps://access.redhat.com/security/cve/CVE-2018-14463\nhttps://access.redhat.com/security/cve/CVE-2018-14464\nhttps://access.redhat.com/security/cve/CVE-2018-14465\nhttps://access.redhat.com/security/cve/CVE-2018-14466\nhttps://access.redhat.com/security/cve/CVE-2018-14467\nhttps://access.redhat.com/security/cve/CVE-2018-14468\nhttps://access.redhat.com/security/cve/CVE-2018-14469\nhttps://access.redhat.com/security/cve/CVE-2018-14470\nhttps://access.redhat.com/security/cve/CVE-2018-14553\nhttps://access.redhat.com/security/cve/CVE-2018-14879\nhttps://access.redhat.com/security/cve/CVE-2018-14880\nhttps://access.redhat.com/security/cve/CVE-2018-14881\nhttps://access.redhat.com/security/cve/CVE-2018-14882\nhttps://access.redhat.com/security/cve/CVE-2018-16227\nhttps://access.redhat.com/security/cve/CVE-2018-16228\nhttps://access.redhat.com/security/cve/CVE-2018-16229\nhttps://access.redhat.com/security/cve/CVE-2018-16230\nhttps://access.redhat.com/security/cve/CVE-2018-16300\nhttps://access.redhat.com/security/cve/CVE-2018-16451\nhttps://access.redhat.com/security/cve/CVE-2018-16452\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2019-3884\nhttps://access.redhat.com/security/cve/CVE-2019-5018\nhttps://access.redhat.com/security/cve/CVE-2019-6977\nhttps://access.redhat.com/security/cve/CVE-2019-6978\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.r
edhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9455\nhttps://access.redhat.com/security/cve/CVE-2019-9458\nhttps://access.redhat.com/security/cve/CVE-2019-11068\nhttps://access.redhat.com/security/cve/CVE-2019-12614\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13225\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15165\nhttps://access.redhat.com/security/cve/CVE-2019-15166\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-15917\nhttps://access.redhat.com/security/cve/CVE-2019-15925\nhttps://access.redhat.com/security/cve/CVE-2019-16167\nhttps://access.redhat.com/security/cve/CVE-2019-16168\nhttps://access.redhat.com/security/cve/CVE-2019-16231\nhttps://access.redhat.com/security/cve/CVE-2019-16233\nhttps://access.redhat.com/security/cve/CVE-2019-16935\nhttps://access.redhat.com/security/cve/CVE-2019-17450\nhttps://access.redhat.com/security/cve/CVE-2019-17546\nhttps://access.redhat.com/security/cve/CVE-2019-18197\
nhttps://access.redhat.com/security/cve/CVE-2019-18808\nhttps://access.redhat.com/security/cve/CVE-2019-18809\nhttps://access.redhat.com/security/cve/CVE-2019-19046\nhttps://access.redhat.com/security/cve/CVE-2019-19056\nhttps://access.redhat.com/security/cve/CVE-2019-19062\nhttps://access.redhat.com/security/cve/CVE-2019-19063\nhttps://access.redhat.com/security/cve/CVE-2019-19068\nhttps://access.redhat.com/security/cve/CVE-2019-19072\nhttps://access.redhat.com/security/cve/CVE-2019-19221\nhttps://access.redhat.com/security/cve/CVE-2019-19319\nhttps://access.redhat.com/security/cve/CVE-2019-19332\nhttps://access.redhat.com/security/cve/CVE-2019-19447\nhttps://access.redhat.com/security/cve/CVE-2019-19524\nhttps://access.redhat.com/security/cve/CVE-2019-19533\nhttps://access.redhat.com/security/cve/CVE-2019-19537\nhttps://access.redhat.com/security/cve/CVE-2019-19543\nhttps://access.redhat.com/security/cve/CVE-2019-19602\nhttps://access.redhat.com/security/cve/CVE-2019-19767\nhttps://access.redhat.com/security/cve/CVE-2019-19770\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20054\nhttps://access.redhat.com/security/cve/CVE-2019-20218\nhttps://access.redhat.com/security/cve/CVE-2019-20386\nhttps://access.redhat.com/security/cve/CVE-2019-20387\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20636\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/security/cve/CVE-2019-20812\nhttps://access.redhat.com/security/cve/CVE-2019-20907\nhttps://access.redhat.com/security/cve/CVE-2019-20916\nhttps://access.redhat.com/security/cve/CVE-2020-0305\nhttps://access.redhat.com/security/cve/CVE-2020-0444\nhttps://access.redhat.com/security/cve/CVE-2020-1716\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.c
om/security/cve/CVE-2020-1751\nhttps://access.redhat.com/security/cve/CVE-2020-1752\nhttps://access.redhat.com/security/cve/CVE-2020-1971\nhttps://access.redhat.com/security/cve/CVE-2020-2574\nhttps://access.redhat.com/security/cve/CVE-2020-2752\nhttps://access.redhat.com/security/cve/CVE-2020-2922\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3898\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-6405\nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-8177\nhttps://access.redhat.com/security/cve/CVE-2020-8492\nhttps://access.redhat.com/security/cve/CVE-2020-8563\nhttps://access.redhat.com/security/cve/CVE-2020-8566\nhttps://access.redhat.com/security/cve/CVE-2020-8619\nhttps://access.redhat.com/security/cve/CVE-2020-8622\nhttps://access.redhat.com/security/cve/CVE-2020-8623\nhttps://access.redhat.com/security/cve/CVE-2020-8624\nhttps://access.redhat.com/security/cve/CVE-2020-8647\nhttps://access.redhat.com/security/cve/CVE-2020-8648\nhttps://access.redhat.com/security/cve/CVE-2020-8649\nhttps://access.redhat.com/security/cve/CVE-2020-9327\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com
/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-10029\nhttps://access.redhat.com/security/cve/CVE-2020-10732\nhttps://access.redhat.com/security/cve/CVE-2020-10749\nhttps://access.redhat.com/security/cve/CVE-2020-10751\nhttps://access.redhat.com/security/cve/CVE-2020-10763\nhttps://access.redhat.com/security/cve/CVE-2020-10773\nhttps://access.redhat.com/security/cve/CVE-2020-10774\nhttps://access.redhat.com/security/cve/CVE-2020-10942\nhttps://access.redhat.com/security/cve/CVE-2020-11565\nhttps://access.redhat.com/security/cve/CVE-2020-11668\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-12465\nhttps://access.redhat.com/security/cve/CVE-2020-12655\nhttps://access.redhat.com/security/cve/CVE-2020-12659\nhttps://access.redhat.com/security/cve/CVE-2020-12770\nhttps://access.redhat.com/security/cve/CVE-2020-12826\nhttps://access.redhat.com/security/cve/CVE-2020-13249\nhttps://access.redhat.com/security/cve/CVE-2020-13630\nhttps://access.redhat.com/security/cve/CVE-2020-13631\nhttps://access.redhat.com/security/cve/CVE-2020-13632\nhttps://access.redhat.com/security/cve/CVE-2020-14019\nhttps://access.redhat.com/security/cve/CVE-2020-14040\nhttps://access.redhat.com/security/cve/CVE-2020-14381\nhttps://access.redhat.com/security/cve/CVE-2020-14382\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nh
ttps://access.redhat.com/security/cve/CVE-2020-14422\nhttps://access.redhat.com/security/cve/CVE-2020-15157\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-15862\nhttps://access.redhat.com/security/cve/CVE-2020-15999\nhttps://access.redhat.com/security/cve/CVE-2020-16166\nhttps://access.redhat.com/security/cve/CVE-2020-24490\nhttps://access.redhat.com/security/cve/CVE-2020-24659\nhttps://access.redhat.com/security/cve/CVE-2020-25211\nhttps://access.redhat.com/security/cve/CVE-2020-25641\nhttps://access.redhat.com/security/cve/CVE-2020-25658\nhttps://access.redhat.com/security/cve/CVE-2020-25661\nhttps://access.redhat.com/security/cve/CVE-2020-25662\nhttps://access.redhat.com/security/cve/CVE-2020-25681\nhttps://access.redhat.com/security/cve/CVE-2020-25682\nhttps://access.redhat.com/security/cve/CVE-2020-25683\nhttps://access.redhat.com/security/cve/CVE-2020-25684\nhttps://access.redhat.com/security/cve/CVE-2020-25685\nhttps://access.redhat.com/security/cve/CVE-2020-25686\nhttps://access.redhat.com/security/cve/CVE-2020-25687\nhttps://access.redhat.com/security/cve/CVE-2020-25694\nhttps://access.redhat.com/security/cve/CVE-2020-25696\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-27813\nhttps://access.redhat.com/security/cve/CVE-2020-27846\nhttps://access.redhat.com/security/cve/CVE-2020-28362\nhttps://access.redhat.com/security/cve/CVE-2020-29652\nhttps://access.redhat.com/security/cve/CVE-2021-2007\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T\nlmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H\nEmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8\n4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4\nmWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL\nISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy\nAe5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk\n4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM\nuR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG\nkrzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv\nRjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6\nMcvuEaxco7U=\n=sw8i\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Solution:\n\nFor information on upgrading Ansible Tower, reference the Ansible Tower\nUpgrade and Migration Guide:\nhttps://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/\nindex.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1790277 - CVE-2019-20372 nginx: HTTP request smuggling in configurations with URL redirect used as error_page\n1828406 - CVE-2020-11022 jquery: Cross-site scripting due to improper injQuery.htmlPrefilter method\n1850004 - CVE-2020-11023 jquery: Passing HTML containing \u003coption\u003e elements to manipulation methods could result in untrusted code execution\n1911314 - CVE-2020-35678 python-autobahn: allows redirect header injection\n1928847 - CVE-2021-20253 ansible-tower: Privilege escalation via job isolation escape\n\n5. This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. 
\n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 6 packages\nthat are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most\nsignificant bug fixes and enhancements included in this release. Solution:\n\nBefore applying the update, back up your existing installation, including\nall applications, configuration files, databases and database settings, and\nso on. \n\nThe References section of this erratum contains a download link for the\nupdate. You must be logged in to download the update. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module\n1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values\n1916813 - CVE-2021-20191 ansible: multiple modules expose secured values\n1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option\n1939349 - CVE-2021-3447 ansible: multiple modules expose secured values\n\n5. ==========================================================================\nUbuntu Security Notice USN-4662-1\nDecember 08, 2020\n\nopenssl, openssl1.0 vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 LTS\n\nSummary:\n\nOpenSSL could be made to crash if it processed specially crafted input. 
\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.10:\n libssl1.1 1.1.1f-1ubuntu4.1\n\nUbuntu 20.04 LTS:\n libssl1.1 1.1.1f-1ubuntu2.1\n\nUbuntu 18.04 LTS:\n libssl1.0.0 1.0.2n-1ubuntu5.5\n libssl1.1 1.1.1-1ubuntu2.1~18.04.7\n\nUbuntu 16.04 LTS:\n libssl1.0.0 1.0.2g-1ubuntu4.18\n\nAfter a standard system update you need to reboot your computer to make all\nthe necessary changes. 7) - aarch64, ppc64le, s390x\n\n3. \n\nThis issue was reported to OpenSSL on 9th November 2020 by David Benjamin\n(Google). Initial analysis was performed by David Benjamin with additional\nanalysis by Matt Caswell (OpenSSL). The fix was developed by Matt Caswell. \n\nNote\n====\n\nOpenSSL 1.0.2 is out of support and no longer receiving public updates. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20201208.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. 
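The advisory text above gives the fixed releases (OpenSSL 1.1.1i for the 1.1.1 branch, 1.0.2x for 1.0.2), and upstream letter suffixes sort alphabetically within a branch. As a rough triage aid, and purely as an illustrative sketch rather than part of any advisory, the OpenSSL version linked into a Python interpreter can be compared against those fixed letters; the `is_patched` helper and `FIXED` table below are hypothetical names introduced for this example, and branches other than 1.0.2/1.1.1 are reported as not applicable:

```python
import re
import ssl

# CVE-2020-1971 is fixed in OpenSSL 1.1.1i (affected: 1.1.1-1.1.1h)
# and 1.0.2x (affected: 1.0.2-1.0.2w). Letter suffixes sort
# alphabetically within a patch series, so a simple string compare
# of the suffix against the fixed letter is sufficient here.
FIXED = {"1.1.1": "i", "1.0.2": "x"}

def is_patched(version_string):
    """True/False for the 1.0.2 and 1.1.1 branches; None if the
    version string is unrecognized or on an unaffected branch."""
    m = re.match(r"OpenSSL (\d+\.\d+\.\d+)([a-z]*)", version_string)
    if not m:
        return None  # e.g. LibreSSL - cannot tell from the string
    base, letter = m.groups()
    if base not in FIXED:
        return None  # other branches are not covered by this CVE
    # A bare "1.1.1" (empty letter) predates 1.1.1i and is affected.
    return letter >= FIXED[base]

print(ssl.OPENSSL_VERSION, "patched:", is_patched(ssl.OPENSSL_VERSION))
```

Note that distribution packages (for example the Ubuntu `libssl1.1 1.1.1f-1ubuntu4.1` build listed in USN-4662-1 above) backport the fix without bumping the upstream letter suffix, so on such systems the authoritative check is the installed package version against the versions in the vendor notice, not the upstream version string.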
\n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n", "sources": [ { "db": "NVD", "id": "CVE-2020-1971" }, { "db": "JVNDB", "id": "JVNDB-2020-009865" }, { "db": "VULHUB", "id": "VHN-173115" }, { "db": "VULMON", "id": "CVE-2020-1971" }, { "db": "PACKETSTORM", "id": "160654" }, { "db": "PACKETSTORM", "id": "160644" }, { "db": "PACKETSTORM", "id": "161546" }, { "db": "PACKETSTORM", "id": "161727" }, { "db": "PACKETSTORM", "id": "161382" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "160414" }, { "db": "PACKETSTORM", "id": "160651" }, { "db": "PACKETSTORM", "id": "169642" } ], "trust": 2.61 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-1971", "trust": 3.7 }, { "db": "TENABLE", "id": "TNS-2021-10", "trust": 1.1 }, { "db": "TENABLE", "id": "TNS-2021-09", "trust": 1.1 }, { "db": "TENABLE", "id": "TNS-2020-11", "trust": 1.1 }, { "db": "SIEMENS", "id": "SSA-389290", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/09/14/2", "trust": 1.1 }, { "db": "PULSESECURE", "id": "SA44676", "trust": 1.1 }, { "db": "JVN", "id": "JVNVU91053554", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU91198149", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU90348129", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-046-02", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-21-336-06", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-009865", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "160644", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "161382", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "161727", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "160654", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "160651", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162142", "trust": 0.2 
}, { "db": "PACKETSTORM", "id": "160414", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "160605", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161003", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161388", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161525", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160916", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160499", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161379", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162130", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160636", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161004", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161387", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160638", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160569", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160704", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161916", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161389", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160523", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161390", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160961", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160561", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160639", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161011", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160882", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-173115", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-1971", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161546", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169642", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-173115" }, { "db": "VULMON", "id": "CVE-2020-1971" }, { "db": "JVNDB", "id": "JVNDB-2020-009865" }, { "db": "PACKETSTORM", "id": "160654" }, { "db": "PACKETSTORM", "id": "160644" }, { "db": "PACKETSTORM", "id": "161546" }, { "db": "PACKETSTORM", "id": "161727" }, { "db": "PACKETSTORM", "id": "161382" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "160414" }, { "db": "PACKETSTORM", 
"id": "160651" }, { "db": "PACKETSTORM", "id": "169642" }, { "db": "NVD", "id": "CVE-2020-1971" } ] }, "id": "VAR-202012-1527", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-173115" } ], "trust": 0.44999999999999996 }, "last_update_date": "2024-07-23T19:19:40.435000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "hitachi-sec-2023-126", "trust": 0.8, "url": "https://github.com/openssl/openssl/commit/f960d81215ebf3f65e03d4d5d857fb9b666d6920" }, { "title": "Red Hat: Low: Red Hat JBoss Web Server 3.1 Service Pack 11 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210491 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205422 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210056 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205640 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205639 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205642 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205588 - security advisory" }, { "title": "Red Hat: Low: Red Hat JBoss Web Server 3.1 Service Pack 11 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210489 - security advisory" }, { "title": "Red Hat: Low: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP6 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210488 - security advisory" }, { "title": "Red Hat: Low: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP6 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210486 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205637 - security advisory" }, { "title": "Red Hat: Important: openssl security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205476 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205566 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205641 - security advisory" }, { "title": "Red Hat: Important: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205623 - security advisory" 
}, { "title": "Debian Security Advisories: DSA-4807-1 openssl -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=3bd12ad190825227a63dd6fa019b384e" }, { "title": "Amazon Linux AMI: ALAS-2020-1456", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2020-1456" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.4.1 Security Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210494 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.4.1 Security Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210495 - security advisory" }, { "title": "Arch Linux Advisories: [ASA-202012-24] openssl: denial of service", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202012-24" }, { "title": "IBM: Security Bulletin: Vulnerability in Fabric OS used by IBM b-type SAN directors and switches.", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=d1909a4b2d683c22ce1b861676950187" }, { "title": "Amazon Linux 2: ALAS2-2020-1573", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2020-1573" }, { "title": "IBM: Security Bulletin: z/TPF is affected by an OpenSSL vulnerability", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=96f04546c2e8544a0cc0bdb9c9cff51b" }, { "title": "IBM: Security Bulletin: Vulnerability in OpenSSL affects IBM Integrated Analytics System", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=280f45996e27a28446f71e76b35820ac" }, { "title": "IBM: Security Bulletin: App Connect Enterprise Certified Container may be vulnerable to a denial of service vulnerability (CVE-2020-1971)", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ceec38a4be2281800c7fde6f0ec38787" }, { "title": "Red Hat: Important: Red Hat Ceph Storage 4.2 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210083 - security advisory" }, { "title": "IBM: Security Bulletin: Security Vulnerabilities affect IBM Cloud Pak for Data \u2013 OpenSSL", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=c6da717c9ae4244c7a1befa91a3a66b0" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2020-1971 log" }, { "title": "Red Hat: Moderate: OpenShift Virtualization 2.5.3 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210187 - security advisory" }, { "title": "IBM: Security Bulletin: Vulnerabilities in OpenSSL affect IBM Spectrum Protect Backup-Archive Client NetApp Services (CVE-2020-1971, CVE-2021-23840, CVE-2021-23841)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=a027ad7ae5a4bc5a118b5baf7f42aa8c" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.6.9 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205614 - security advisory" }, { "title": "IBM: Security Bulletin: IBM Cloud Pak for Integration is vulnerable to Node.js vulnerabilities (CVE-2020-1971, CVE-2020-8265, and CVE-2020-8287)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=8cc5ff875578b52e2a6a6ad258f01034" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.6.12 extras and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210039 - security advisory" }, {
"title": "IBM: Security Bulletin: IBM App Connect Enterprise Certified Container may be vulnerable to multiple denial of service and HTTP request smuggling vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=eddc867c302ff9f5218add2962e3491b" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.1.3 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210607 - security advisory" }, { "title": "Hitachi Security Advisories: Vulnerability in JP1/Automatic Job Management System 3", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-116" }, { "title": "Tenable Security Advisories: [R1] Tenable.sc 5.17.0 Fixes Multiple Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2020-11" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.6.12 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210037 - security advisory" }, { "title": "Tenable Security Advisories: [R1] Nessus Agent 8.2.2 Fixes Multiple Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2020-13" }, { "title": "Tenable Security Advisories: [R1] Nessus 8.13.1 Fixes Multiple Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2020-12" }, { "title": "Red Hat: Moderate: Release of OpenShift Serverless 1.12.0", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210146 - security advisory" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Analyzer viewpoint", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-119" }, { "title": "Tenable Security Advisories: [R1] Nessus Network Monitor 5.13.1 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-09" }, { "title": "Tenable Security Advisories: [R1] LCE 6.0.9 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-10" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Command Suite, Hitachi Automation Director, Hitachi Configuration Manager, Hitachi Infrastructure Analytics Advisor and Hitachi Ops Center", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2023-126" }, { "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210050 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210190 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210436 - security advisory" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=4a9822530e6b610875f83ffc10e02aba" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, { "title": "CVE-2020-1971", "trust": 0.1, "url": "https://github.com/mbhudson/cve-2020-1971 " }, { "title": "tools", "trust": 0.1, "url": "https://github.com/scott-leung/tools " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-1971" }, { "db": "JVNDB", "id": "JVNDB-2020-009865" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-476", "trust": 1.1 }, { "problemtype": "NULL Pointer dereference (CWE-476) [IPA evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-173115" }, { "db": "JVNDB", "id": "JVNDB-2020-009865" }, { "db": "NVD", "id": "CVE-2020-1971" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1971" }, { "trust": 1.2, "url": "https://www.openssl.org/news/secadv/20201208.txt" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf" }, { "trust": 1.1, "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44676" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20201218-0005/" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20210513-0002/" }, { "trust": 1.1, "url": "https://www.tenable.com/security/tns-2020-11" }, { "trust": 1.1, "url": "https://www.tenable.com/security/tns-2021-09" }, { "trust": 1.1, "url": "https://www.tenable.com/security/tns-2021-10" }, { "trust": 1.1, "url": "https://www.debian.org/security/2020/dsa-4807" }, { "trust": 1.1, "url": 
"https://security.freebsd.org/advisories/freebsd-sa-20:33.openssl.asc" }, { "trust": 1.1, "url": "https://security.gentoo.org/glsa/202012-13" }, { "trust": 1.1, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuapr2021.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpujan2021.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2020/12/msg00020.html" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2020/12/msg00021.html" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2021/09/14/2" }, { "trust": 1.0, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=2154ab83e14ede338d2ede9bbe5cdfce5d5a6c9e" }, { "trust": 1.0, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=f960d81215ebf3f65e03d4d5d857fb9b666d6920" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r63c6f2dd363d9b514d0a4bcf624580616a679898cc14c109a49b750c%40%3cdev.tomcat.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/rbb769f771711fb274e0a4acb1b5911c8aab544a6ac5e8c12d40c5143%40%3ccommits.pulsar.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/dgsi34y5lq5ryxn4m2i5zqt65lfvdouu/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/pwpssznzobju2yr6z4tghxkyw3yp5qg7/" }, { "trust": 1.0, "url": "https://security.netapp.com/advisory/ntap-20240621-0006/" }, { "trust": 0.8, "url": "http://jvn.jp/cert/jvnvu91053554" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90348129/" }, { "trust": 0.8, "url": 
"https://jvn.jp/vu/jvnvu91198149/index.html" }, { "trust": 0.8, "url": "https://www.jpcert.or.jp/at/2020/at200048.html" }, { "trust": 0.8, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-21-336-06" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-046-02" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2020-1971" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.4, "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8177" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-20907" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-20388" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-7595" }, { "trust": 0.3, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-19956" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-16166" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-15862" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14422" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-15999" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17546" }, { "trust": 0.2, 
"url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17006" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-12749" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-12401" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12402" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20228" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17006" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-11719" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12401" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17023" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17023" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12749" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-6829" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-14866" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-12403" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12400" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11756" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-11756" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-12243" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-12400" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20191" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-11727" }, { "trust": 
0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12243" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11719" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20180" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11727" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12403" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20178" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17498" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17498" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-12402" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2154ab83e14ede338d2ede9bbe5cdfce5d5a6c9e" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=f960d81215ebf3f65e03d4d5d857fb9b666d6920" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/pwpssznzobju2yr6z4tghxkyw3yp5qg7/" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/dgsi34y5lq5ryxn4m2i5zqt65lfvdouu/" }, { "trust": 0.1, "url": "https://lists.apache.org/thread.html/rbb769f771711fb274e0a4acb1b5911c8aab544a6ac5e8c12d40c5143@%3ccommits.pulsar.apache.org%3e" }, { "trust": 0.1, "url": "https://lists.apache.org/thread.html/r63c6f2dd363d9b514d0a4bcf624580616a679898cc14c109a49b750c@%3cdev.tomcat.apache.org%3e" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:5614" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:5615" }, { "trust": 
0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27836" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15862" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16166" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27836" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8177" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:5641" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19770" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25662" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8624" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-16300" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-10105" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25684" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24490" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-2007" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15166" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19072" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8649" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2019-20218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26160" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12655" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-16230" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-9458" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13225" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13249" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27846" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15165" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20636" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14382" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18809" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-16229" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19221" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14882" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20054" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-16227" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12826" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18197" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1751" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25683" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25211" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14461" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19602" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2018-14464" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25661" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25641" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-6977" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8647" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29652" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-16168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15917" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24659" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14469" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9327" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14880" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17450" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-16935" }, { "trust": 0.1, "url": "https://\u0027" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20916" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0305" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5018" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12659" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1716" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20812" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:5633" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14468" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15157" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-6978" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25658" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0444" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2018-14466" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-16233" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25694" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14553" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2752" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20386" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20387" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14467" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14462" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19543" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25682" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14881" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2574" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10751" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-3884" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10763" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1752" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10942" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8622" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19062" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19046" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12465" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19447" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25696" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-25685" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-16231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-6405" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-16451" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14381" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-10103" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-16228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19056" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19524" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14463" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8648" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12770" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19767" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19533" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25686" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19537" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2922" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13632" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25687" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-16167" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10029" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-9455" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11565" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13630" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19332" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-12614" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14040" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14879" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14019" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25681" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19063" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14470" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-8619" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-14465" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-11068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19319" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8563" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10732" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13631" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-16452" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3898" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:5634" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12723" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11023" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20372" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10878" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20253" }, { "trust": 0.1, "url": "https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11023" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:0778" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-11022" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12723" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10543" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-5766" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-5766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20372" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11022" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35678" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:0488" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#low" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_jboss_core_services/2.4.37/" }, { "trust": 0.1, "url": "https://access.redhat.com/jbossnetwork/restricted/listsoftware.html?product=core.service.openssl\u0026downloadtype=securitypatches\u0026version=1.1.1c" }, { "trust": 0.1, "url": "https://access.redhat.com/jbossnetwork/restricted/listsoftware.html?product=core.service.apachehttp\u0026downloadtype=securitypatches\u0026version=2.4.37" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1079" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5188" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2017-12652" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14973" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-12652" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-3156" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3447" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-5313" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5094" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5188" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5094" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14973" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-5313" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14422" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl1.0/1.0.2n-1ubuntu5.5" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl/1.1.1f-1ubuntu2.1" }, { "trust": 0.1, "url": "https://usn.ubuntu.com/4662-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl/1.1.1-1ubuntu2.1~18.04.7" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl/1.0.2g-1ubuntu4.18" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl/1.1.1f-1ubuntu4.1" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:5642" }, { "trust": 0.1, "url": "https://www.openssl.org/policies/secpolicy.html" }, { "trust": 0.1, "url": "https://www.openssl.org/support/contracts.html" } ], "sources": [ { "db": "VULHUB", "id": "VHN-173115" }, { "db": "JVNDB", "id": "JVNDB-2020-009865" }, { "db": "PACKETSTORM", "id": "160654" }, { "db": "PACKETSTORM", "id": "160644" }, { "db": "PACKETSTORM", "id": "161546" }, { "db": "PACKETSTORM", "id": "161727" }, { "db": "PACKETSTORM", "id": "161382" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "160414" }, { "db": "PACKETSTORM", "id": "160651" }, { "db": "PACKETSTORM", "id": "169642" }, { "db": 
"NVD", "id": "CVE-2020-1971" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-173115" }, { "db": "VULMON", "id": "CVE-2020-1971" }, { "db": "JVNDB", "id": "JVNDB-2020-009865" }, { "db": "PACKETSTORM", "id": "160654" }, { "db": "PACKETSTORM", "id": "160644" }, { "db": "PACKETSTORM", "id": "161546" }, { "db": "PACKETSTORM", "id": "161727" }, { "db": "PACKETSTORM", "id": "161382" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "160414" }, { "db": "PACKETSTORM", "id": "160651" }, { "db": "PACKETSTORM", "id": "169642" }, { "db": "NVD", "id": "CVE-2020-1971" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-12-08T00:00:00", "db": "VULHUB", "id": "VHN-173115" }, { "date": "2020-12-08T00:00:00", "db": "VULMON", "id": "CVE-2020-1971" }, { "date": "2020-12-10T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-009865" }, { "date": "2020-12-21T20:24:33", "db": "PACKETSTORM", "id": "160654" }, { "date": "2020-12-21T17:38:24", "db": "PACKETSTORM", "id": "160644" }, { "date": "2021-02-25T15:29:25", "db": "PACKETSTORM", "id": "161546" }, { "date": "2021-03-09T16:25:11", "db": "PACKETSTORM", "id": "161727" }, { "date": "2021-02-11T15:19:41", "db": "PACKETSTORM", "id": "161382" }, { "date": "2021-04-09T15:06:13", "db": "PACKETSTORM", "id": "162142" }, { "date": "2020-12-09T16:09:14", "db": "PACKETSTORM", "id": "160414" }, { "date": "2020-12-21T20:17:29", "db": "PACKETSTORM", "id": "160651" }, { "date": "2020-12-08T12:12:12", "db": "PACKETSTORM", "id": "169642" }, { "date": "2020-12-08T16:15:11.730000", "db": "NVD", "id": "CVE-2020-1971" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-08-29T00:00:00", "db": 
"VULHUB", "id": "VHN-173115" }, { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2020-1971" }, { "date": "2024-02-19T06:01:00", "db": "JVNDB", "id": "JVNDB-2020-009865" }, { "date": "2024-06-21T19:15:16.170000", "db": "NVD", "id": "CVE-2020-1971" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "160414" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL\u00a0 In \u00a0NULL\u00a0 Pointer reference vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-009865" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "overflow, memory leak", "sources": [ { "db": "PACKETSTORM", "id": "161546" } ], "trust": 0.1 } }
var-202103-1463
Vulnerability from variot
The X509_V_FLAG_X509_STRICT flag enables additional security checks of the certificates present in a certificate chain. It is not set by default. Starting from OpenSSL version 1.1.1h a check to disallow certificates in the chain that have explicitly encoded elliptic curve parameters was added as an additional strict check. An error in the implementation of this check meant that the result of a previous check to confirm that certificates in the chain are valid CA certificates was overwritten. This effectively bypasses the check that non-CA certificates must not be able to issue other certificates. If a "purpose" has been configured then there is a subsequent opportunity for checks that the certificate is a valid CA. All of the named "purpose" values implemented in libcrypto perform this check. Therefore, where a purpose is set the certificate chain will still be rejected even when the strict flag has been used. A purpose is set by default in libssl client and server certificate verification routines, but it can be overridden or removed by an application. In order to be affected, an application must explicitly set the X509_V_FLAG_X509_STRICT verification flag and either not set a purpose for the certificate verification or, in the case of TLS client or server applications, override the default purpose. OpenSSL versions 1.1.1h and newer are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. Fixed in OpenSSL 1.1.1k (Affected 1.1.1h-1.1.1j). OpenSSL is an open source general encryption library of the Openssl team that can implement the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. 
Exploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
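The mitigation described above hinges on a verification "purpose" being set alongside the X509_V_FLAG_X509_STRICT flag. The following sketch is illustrative only (it is not from the advisory): it uses Python's `ssl` module, which exposes the underlying OpenSSL flag as `ssl.VERIFY_X509_STRICT`, to show an application opting in to strict checks while leaving the default purpose (server authentication) in place — the configuration the advisory describes as unaffected.

```python
import ssl

# Build a client context; create_default_context() configures certificate
# verification with a purpose (server authentication) already set.
ctx = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH)

# Opt in to OpenSSL's X509_V_FLAG_X509_STRICT additional checks.
ctx.verify_flags |= ssl.VERIFY_X509_STRICT

# Because a purpose remains configured, the "is this a valid CA" check is
# repeated during purpose validation, so chains issued by non-CA
# certificates are still rejected even on affected OpenSSL builds.
print(bool(ctx.verify_flags & ssl.VERIFY_X509_STRICT))
```

An application would only become vulnerable if it additionally cleared or overrode that default purpose, as the advisory text explains.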
Security:
- fastify-reply-from: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21321)
- fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21322)
- nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)
- redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)
- redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)
- nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
- nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
- nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)
- oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)
- redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
- nodejs-lodash: command injection via template (CVE-2021-23337)
- nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)
- browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)
- nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
- nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
- openssl: integer overflow in CipherUpdate (CVE-2021-23840)
- openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
- nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)
- grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)
- nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)
- nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)
- ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)
- normalize-url: ReDoS for data URLs (CVE-2021-33502)
- nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)
- nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
- html-parse-stringify: Regular Expression DoS (CVE-2021-23346)
- openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)
For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.
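One recurring class in the list above is "zip-slip" (CVE-2021-21272 in oras): an archive entry whose name contains path traversal components escapes the extraction directory. The following sketch is a generic illustration of the defense, not the oras code; the `safe_extract` helper name is hypothetical, and Python's own `zipfile` already applies similar sanitization internally.

```python
import os
import zipfile

def safe_extract(zf: zipfile.ZipFile, dest: str) -> None:
    """Extract an archive, refusing entries that would escape dest."""
    dest_root = os.path.realpath(dest)
    for name in zf.namelist():
        # Resolve the final on-disk path this entry would be written to.
        target = os.path.realpath(os.path.join(dest_root, name))
        # Entries such as "../evil.txt" or "/etc/passwd" resolve outside
        # dest_root, which is the zip-slip condition.
        if os.path.commonpath([dest_root, target]) != dest_root:
            raise ValueError(f"zip-slip entry rejected: {name!r}")
    zf.extractall(dest_root)
```

The key design point is validating the fully resolved path against the destination root before any file is written, rather than pattern-matching on "..", which misses symlink and absolute-path variants.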
Bugs:
- RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)
- cluster became offline after apiserver health check (BZ# 1942589)
Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
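Two of the bugs above (CVE-2021-28918, CVE-2021-29418) concern nodejs-netmask mishandling octal octets in IP addresses. The ambiguity can be reproduced with Python's standard library (this is an illustration of the vulnerability class, not the netmask code): legacy C-style parsers interpret a leading zero as octal, while strict parsers reject the input entirely.

```python
import ipaddress
import socket

# Legacy C parsers (inet_aton) treat a leading zero as octal:
# "0177.0.0.1" silently becomes 127.0.0.1, not 177.0.0.1 — which is how
# octal-confusion bugs turn an "external" address into localhost.
legacy = ipaddress.IPv4Address(socket.inet_aton("0177.0.0.1"))
print(legacy)  # 127.0.0.1

# A strict parser rejects the ambiguous form outright, which is the
# safer behaviour the netmask fixes moved towards.
try:
    ipaddress.ip_address("0177.0.0.1")
except ValueError as exc:
    print("rejected:", exc)
```

The security impact arises when an access-control check and the actual connection logic parse the same string differently, so rejecting ambiguous forms at the boundary is the robust fix.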
Bug fix:
- RHACM 2.0.10 images (BZ #1940452)
Bugs fixed (https://bugzilla.redhat.com/):
1940452 - RHACM 2.0.10 images
1944286 - CVE-2021-23358 nodejs-underscore: Arbitrary code execution via the template function
=====================================================================
Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID: RHSA-2022:0056-01
Product: Red Hat OpenShift Enterprise
Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056
Issue date: 2022-03-10
CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407
=====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
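The tag-based commands above can also be run against the published digests directly, which guarantees the inspected content matches this advisory even if a tag is later re-pointed. A minimal sketch, assuming only the repository and x86_64 digest quoted above (the helper does nothing but construct the by-digest pullspec):

```shell
#!/bin/sh
# Build a by-digest pullspec for the x86_64 release image listed above.
# Passing this to 'oc adm release info' pins inspection to the exact image,
# independent of where the 4.10.3-x86_64 tag points later.
REPO="quay.io/openshift-release-dev/ocp-release"
DIGEST="sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"
echo "${REPO}@${DIGEST}"
```

Actually running `oc adm release info` on that pullspec requires network access to quay.io and a valid pull secret, so it is not invoked here.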
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
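The CLI check mentioned above can be sketched as follows. The commands are printed rather than executed, since they need a live cluster and cluster-admin credentials; `oc adm upgrade` with no arguments lists updates available in the cluster's current channel, and `--to` requests a specific version:

```shell
#!/bin/sh
# Illustration only: print the commands an administrator would run.
# 'oc adm upgrade' (no args) shows updates available in the configured channel;
# '--to 4.10.3' asks the cluster-version operator for this advisory's release.
TARGET_VERSION="4.10.3"
echo "Check for updates:  oc adm upgrade"
echo "Request upgrade:    oc adm upgrade --to ${TARGET_VERSION}"
```

See the upgrade documentation linked above before running either command against a production cluster.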
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is built for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is built for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reset to “” during the installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentation link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stops working
2012902 - Neutron Ports assigned to Completed Pods are not reused
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in new tabs (sharing and opening a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too many recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim column value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When running opm index prune, it fails with error: removing operator package cic-operator: FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned PVC attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where the bootstrap.ign file is stored shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed successfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalization is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updating a task is failing (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stuck at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keeps restarting because it fails to get the feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP address not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' gets stuck when the cluster resource group no longer exists.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn't support aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with a valid mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation: openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to a non-existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more than one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor if multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update spec.storage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degraded because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL 0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z 7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT b5PUYUBIZLc= =GUDA -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Solution:
Before applying the update, back up your existing installation, including all applications, configuration files, databases and database settings, and so on.
The References section of this erratum contains a download link for the update. You must be logged in to download the update. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.1.6 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
Bug fixes:
- RHACM 2.1.6 images (BZ#1940581)
- When generating the import cluster string, it can include unescaped characters (BZ#1934184)
- Bugs fixed (https://bugzilla.redhat.com/):
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1929338 - CVE-2020-35149 mquery: Code injection via merge or clone operation 1934184 - When generating the import cluster string, it can include unescaped characters 1940581 - RHACM 2.1.6 images
- Relevant releases/architectures:
Red Hat JBoss Core Services on RHEL 7 Server - noarch, ppc64, x86_64
- Description:
This release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages that are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most significant bug fixes and enhancements included in this release. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT 1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing
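For context on the first entry: the X509_V_FLAG_X509_STRICT flag is only in play when an application explicitly opts into strict certificate verification. As a minimal sketch (an editorial illustration, not part of the advisory text), Python's ssl module wraps the same OpenSSL flag as ssl.VERIFY_X509_STRICT, so one can inspect both the opt-in and the OpenSSL build the interpreter links against:

```python
import ssl

# Illustration only; assumes a standard CPython linked against OpenSSL.
# Python exposes OpenSSL's X509_V_FLAG_X509_STRICT verification flag
# (the flag CVE-2021-3450 concerns) as ssl.VERIFY_X509_STRICT.
ctx = ssl.create_default_context()
ctx.verify_flags |= ssl.VERIFY_X509_STRICT

# Report the linked OpenSSL version: builds shipping OpenSSL 1.1.1h
# through 1.1.1j are in the affected range; 1.1.1k carries the fix.
print(ssl.OPENSSL_VERSION)
print(bool(ctx.verify_flags & ssl.VERIFY_X509_STRICT))
```

Note that Python's default contexts also set a server/client "purpose", which per the advisory causes the chain to be rejected anyway; applications using raw OpenSSL with the strict flag and no purpose were the exposed case.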
- Package List:
Red Hat JBoss Core Services on RHEL 7 Server:
Source: jbcs-httpd24-httpd-2.4.37-70.jbcs.el7.src.rpm jbcs-httpd24-mod_cluster-native-1.3.14-20.Final_redhat_2.jbcs.el7.src.rpm jbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.src.rpm jbcs-httpd24-mod_jk-1.2.48-13.redhat_1.jbcs.el7.src.rpm jbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.src.rpm jbcs-httpd24-mod_security-2.9.2-60.GA.jbcs.el7.src.rpm jbcs-httpd24-nghttp2-1.39.2-37.jbcs.el7.src.rpm jbcs-httpd24-openssl-1.1.1g-6.jbcs.el7.src.rpm jbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.src.rpm jbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.src.rpm
noarch: jbcs-httpd24-httpd-manual-2.4.37-70.jbcs.el7.noarch.rpm
ppc64: jbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.ppc64.rpm jbcs-httpd24-mod_http2-debuginfo-1.15.7-14.jbcs.el7.ppc64.rpm jbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.ppc64.rpm jbcs-httpd24-mod_md-debuginfo-2.0.8-33.jbcs.el7.ppc64.rpm jbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.ppc64.rpm jbcs-httpd24-openssl-chil-debuginfo-1.0.0-5.jbcs.el7.ppc64.rpm jbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.ppc64.rpm jbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-20.jbcs.el7.ppc64.rpm
x86_64: jbcs-httpd24-httpd-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-httpd-debuginfo-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-httpd-devel-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-httpd-selinux-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-httpd-tools-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_cluster-native-1.3.14-20.Final_redhat_2.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_cluster-native-debuginfo-1.3.14-20.Final_redhat_2.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_http2-debuginfo-1.15.7-14.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_jk-ap24-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_jk-debuginfo-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_jk-manual-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_ldap-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_md-debuginfo-2.0.8-33.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_proxy_html-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_security-2.9.2-60.GA.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_security-debuginfo-2.9.2-60.GA.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_session-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_ssl-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-nghttp2-1.39.2-37.jbcs.el7.x86_64.rpm jbcs-httpd24-nghttp2-debuginfo-1.39.2-37.jbcs.el7.x86_64.rpm jbcs-httpd24-nghttp2-devel-1.39.2-37.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-chil-debuginfo-1.0.0-5.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-debuginfo-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-devel-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-libs-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-perl-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-20.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-static-1.1.1g-6.jbcs.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Description:
Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache HTTP Server, the Apache Tomcat Servlet container, Apache Tomcat Connector (mod_jk), JBoss HTTP Connector (mod_cluster), Hibernate, and the Tomcat Native library.
Security Fix(es):
- golang: crypto/tls: certificate of wrong type is causing TLS client to panic (CVE-2021-34558)
- golang: net: lookup functions may return invalid host names (CVE-2021-33195)
- golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty (CVE-2021-33197)
- golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents (CVE-2021-33198)
- golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a custom TokenReader (CVE-2021-27918)
- golang: net/http: panic in ReadRequest and ReadResponse when reading a very large header (CVE-2021-31525)
- golang: archive/zip: malformed archive may cause panic or memory exhaustion (CVE-2021-33196)
It was found that CVE-2021-27918, CVE-2021-31525, and CVE-2021-33196 had been incorrectly listed as fixed in the RHSA for Serverless client kn 1.16.0. Bugs fixed (https://bugzilla.redhat.com/):
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic 1983651 - Release of OpenShift Serverless Serving 1.17.0 1983654 - Release of OpenShift Serverless Eventing 1.17.0 1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names 1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty 1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents 1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202103-1463", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "storagegrid", "scope": "eq", "trust": 2.0, "vendor": "netapp", "version": null }, { "model": "capture client", "scope": "lt", "trust": 1.0, "vendor": "sonicwall", "version": "3.6.24" }, { "model": "jd edwards enterpriseone tools", "scope": "lt", "trust": 
1.0, "vendor": "oracle", "version": "9.2.6.0" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "5.7.33" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "windriver", "version": "18.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.16.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "15.0.0" }, { "model": "mysql enterprise monitor", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "15.14.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "10.24.1" }, { "model": "weblogic server", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.1.1.0.0" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.0.0.2" }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "secure global desktop", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.6" }, { "model": "nessus agent", "scope": "gte", "trust": 1.0, "vendor": "tenable", "version": "8.2.1" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "weblogic server", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.4.0" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1k" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { 
"model": "commerce guided search", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "11.3.2" }, { "model": "mysql workbench", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "sonicos", "scope": "lte", "trust": 1.0, "vendor": "sonicwall", "version": "7.0.1-r1456" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.0.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "windriver", "version": null }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.3.1.2" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "windriver", "version": "19.0" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.3.5" }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "peoplesoft enterprise peopletools", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.57" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.0.0" }, { "model": "secure backup", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "18.1.0.1.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "mysql server", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.15" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "windriver", "version": "17.0" }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise manager for storage management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.4.0.0" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "jd edwards world security", "scope": 
"eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "email security", "scope": "lt", "trust": 1.0, "vendor": "sonicwall", "version": "10.0.11" }, { "model": "peoplesoft enterprise peopletools", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.59" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1h" }, { "model": "cloud volumes ontap mediator", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "freebsd", "scope": "eq", "trust": 1.0, "vendor": "freebsd", "version": "12.2" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "nessus agent", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "8.2.3" }, { "model": "mysql connectors", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "sma100", "scope": "lt", "trust": 1.0, "vendor": "sonicwall", "version": "10.2.1.0-17sv" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "12.22.1" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.11.1" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.1" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "nessus", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "8.13.1" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.11.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3450" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { 
"@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1k", "versionStartIncluding": "1.1.1h", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:santricity_smi-s_provider_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:storagegrid_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:storagegrid:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:windriver:linux:-:*:*:*:cd:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:windriver:linux:18.0:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:windriver:linux:19.0:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:windriver:linux:17.0:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storagegrid:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:cloud_volumes_ontap_mediator:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:tenable:nessus_agent:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.2.3", "versionStartIncluding": "8.2.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.13.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.13.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.2.1.4.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:14.1.1.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:enterprise_manager_for_storage_management:13.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_global_desktop:5.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:20.3.1.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:21.0.0.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:19.3.5:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "versionStartIncluding": "8.0.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.7.33", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:commerce_guided_search:11.3.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_connectors:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.2.6.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_enterprise_monitor:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_backup:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "18.1.0.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.59", "versionStartIncluding": "8.57", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:mcafee:web_gateway_cloud_service:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:sonicwall:sma100_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.2.1.0-17sv", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:sonicwall:sma100:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:sonicwall:sonicos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "7.0.1-r1456", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:sonicwall:email_security:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.0.11", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:sonicwall:capture_client:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.6.24", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "15.14.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "14.16.1", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], 
"versionEndExcluding": "12.22.1", "versionStartIncluding": "12.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "10.24.1", "versionStartIncluding": "10.0.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3450" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" } ], "trust": 0.9 }, "cve": "CVE-2021-3450", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 4.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, 
"obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "VHN-388430", "impactScore": 4.9, "integrityImpact": "PARTIAL", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "CVE-2021-3450", "impactScore": 4.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "MEDIUM", "trust": 0.1, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 7.4, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.2, "impactScore": 5.2, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-3450", "trust": 1.0, "value": "HIGH" }, { "author": "VULHUB", "id": "VHN-388430", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-3450", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, "description": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The X509_V_FLAG_X509_STRICT flag enables additional security checks of the certificates present in a certificate chain. It is not set by default. Starting from OpenSSL version 1.1.1h a check to disallow certificates in the chain that have explicitly encoded elliptic curve parameters was added as an additional strict check. An error in the implementation of this check meant that the result of a previous check to confirm that certificates in the chain are valid CA certificates was overwritten. This effectively bypasses the check that non-CA certificates must not be able to issue other certificates. If a \"purpose\" has been configured then there is a subsequent opportunity for checks that the certificate is a valid CA. All of the named \"purpose\" values implemented in libcrypto perform this check. Therefore, where a purpose is set the certificate chain will still be rejected even when the strict flag has been used. A purpose is set by default in libssl client and server certificate verification routines, but it can be overridden or removed by an application. In order to be affected, an application must explicitly set the X509_V_FLAG_X509_STRICT verification flag and either not set a purpose for the certificate verification or, in the case of TLS client or server applications, override the default purpose. OpenSSL versions 1.1.1h and newer are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. Fixed in OpenSSL 1.1.1k (Affected 1.1.1h-1.1.1j). OpenSSL is an open source general encryption library of the Openssl team that can implement the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. 
The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. \nExploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html/release_notes/\n\nSecurity:\n\n* fastify-reply-from: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21321)\n\n* fastify-http-proxy: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21322)\n\n* nodejs-netmask: improper input validation of octal input data\n(CVE-2021-28918)\n\n* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)\n\n* redis: Integer overflow via COPY command for large intsets\n(CVE-2021-29478)\n\n* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing\n- -u- extension (CVE-2020-28851)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)\n\n* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)\n\n* redis: integer overflow when configurable limit for maximum supported\nbulk input size is too big on 32-bit platforms (CVE-2021-21309)\n\n* nodejs-lodash: command injection via template
(CVE-2021-23337)\n\n* nodejs-hosted-git-info: Regular Expression denial of service via\nshortcutMatch in fromUrl() (CVE-2021-23362)\n\n* browserslist: parsing of invalid queries could result in Regular\nExpression Denial of Service (ReDoS) (CVE-2021-23364)\n\n* nodejs-postcss: Regular expression denial of service during source map\nparsing (CVE-2021-23368)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with strict:true option (CVE-2021-23369)\n\n* nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in\nlib/previous-map.js (CVE-2021-23382)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with compat:true option (CVE-2021-23383)\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\n* nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n(CVE-2021-27292)\n\n* grafana: snapshot feature allow an unauthenticated remote attacker to\ntrigger a DoS via a remote API call (CVE-2021-27358)\n\n* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)\n\n* nodejs-netmask: incorrectly parses an IP address that has octal integer\nwith invalid character (CVE-2021-29418)\n\n* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n(CVE-2021-29482)\n\n* normalize-url: ReDoS for data URLs (CVE-2021-33502)\n\n* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\n* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)\n\n* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)\n\nFor more details about the security issues, including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npages listed in the References section. 
\n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - 
CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. \n\nBug fix:\n\n* RHACM 2.0.10 images (BZ #1940452)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1940452 - RHACM 2.0.10 images\n1944286 - CVE-2021-23358 nodejs-underscore: Arbitrary code execution via the template function\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID: RHSA-2022:0056-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0056\nIssue date: 2022-03-10\nCVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n 
CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 \n CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for detailed instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image
operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. \n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t 
resilient to WAL replays. \n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String 
updates\n1961509 - DHCP daemon pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not 
established\n1976301 - [ci] e2e-azure-upi is permafailing\n1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. \n1976674 - CCO didn\u0027t set Upgradeable to False when cco mode is configured to Manual on azure platform\n1976894 - Unidling a StatefulSet does not work as expected\n1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases\n1977414 - Build Config timed out waiting for condition 400: Bad Request\n1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus\n1978528 - systemd-coredump started and failed intermittently for unknown reasons\n1978581 - machine-config-operator: remove runlevel from mco namespace\n1979562 - Cluster operators: don\u0027t show messages when neither progressing, degraded or unavailable\n1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9\n1979966 - OCP builds always fail when run on RHEL7 nodes\n1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading\n1981549 - Machine-config daemon does not recover from broken Proxy configuration\n1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]\n1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues\n1982063 - \u0027Control Plane\u0027 is not translated in Simplified Chinese language in Home-\u003eOverview page\n1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands\n1982662 - Workloads - DaemonSets - Add storage: i18n misses\n1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE \"*/secrets/encryption-config\" on single node clusters\n1983758 - upgrades are failing on disruptive tests\n1983964 - Need Device plugin configuration for the NIC 
\"needVhostNet\" \u0026 \"isRdma\"\n1984592 - global pull secret not working in OCP4.7.4+ for additional private registries\n1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs\n1985486 - Cluster Proxy not used during installation on OSP with Kuryr\n1985724 - VM Details Page missing translations\n1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted\n1985933 - Downstream image registry recommendation\n1985965 - oVirt CSI driver does not report volume stats\n1986216 - [scale] SNO: Slow Pod recovery due to \"timed out waiting for OVS port binding\"\n1986237 - \"MachineNotYetDeleted\" in Pending state , alert not fired\n1986239 - crictl create fails with \"PID namespace requested, but sandbox infra container invalid\"\n1986302 - console continues to fetch prometheus alert and silences for normal user\n1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI\n1986338 - error creating list of resources in Import YAML\n1986502 - yaml multi file dnd duplicates previous dragged files\n1986819 - fix string typos for hot-plug disks\n1987044 - [OCPV48] Shutoff VM is being shown as \"Starting\" in WebUI when using spec.runStrategy Manual/RerunOnFailure\n1987136 - Declare operatorframework.io/arch.* labels for all operators\n1987257 - Go-http-client user-agent being used for oc adm mirror requests\n1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold\n1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP\n1988406 - SSH key dropped when selecting \"Customize virtual machine\" in UI\n1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade\n1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another 
master fails with \"Unable to connect to the server\"\n1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs\n1989438 - expected replicas is wrong\n1989502 - Developer Catalog is disappearing after short time\n1989843 - \u0027More\u0027 and \u0027Show Less\u0027 functions are not translated on several page\n1990014 - oc debug \u003cpod-name\u003e does not work for Windows pods\n1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created\n1990193 - \u0027more\u0027 and \u0027Show Less\u0027 is not being translated on Home -\u003e Search page\n1990255 - Partial or all of the Nodes/StorageClasses don\u0027t appear back on UI after text is removed from search bar\n1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI\n1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks\n1990556 - get-resources.sh doesn\u0027t honor the no_proxy settings even with no_proxy var\n1990625 - Ironic agent registers with SLAAC address with privacy-stable\n1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time\n1991067 - github.com can not be resolved inside pods where cluster is running on openstack. 
\n1991573 - Enable typescript strictNullCheck on network-policies files\n1991641 - Baremetal Cluster Operator still Available After Delete Provisioning\n1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator\n1991819 - Misspelled word \"ocurred\" in oc inspect cmd\n1991942 - Alignment and spacing fixes\n1992414 - Two rootdisks show on storage step if \u0027This is a CD-ROM boot source\u0027 is checked\n1992453 - The configMap failed to save on VM environment tab\n1992466 - The button \u0027Save\u0027 and \u0027Reload\u0027 are not translated on vm environment tab\n1992475 - The button \u0027Open console in New Window\u0027 and \u0027Disconnect\u0027 are not translated on vm console tab\n1992509 - Could not customize boot source due to source PVC not found\n1992541 - all the alert rules\u0027 annotations \"summary\" and \"description\" should comply with the OpenShift alerting guidelines\n1992580 - storageProfile should stay with the same value by check/uncheck the apply button\n1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply\n1992777 - [IBMCLOUD] Default \"ibm_iam_authorization_policy\" is not working as expected in all scenarios\n1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)\n1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing\n1994094 - Some hardcodes are detected at the code level in OpenShift console components\n1994142 - Missing required cloud config fields for IBM Cloud\n1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools\n1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart\n1995335 - [SCALE] ovnkube CNI: remove ovs flows check\n1995493 - Add Secret to workload button and Actions button are not aligned on secret details 
page\n1995531 - Create RDO-based Ironic image to be promoted to OKD\n1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator\n1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs\n1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread\n1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole\n1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN\n1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down\n1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page\n1996647 - Provide more useful degraded message in auth operator on DNS errors\n1996736 - Large number of 501 lr-policies in INCI2 env\n1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes\n1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP\n1996928 - Enable default operator indexes on ARM\n1997028 - prometheus-operator update removes env var support for thanos-sidecar\n1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used\n1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller. 
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn't enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not
restarted when It cannot start servers due to ports being used\n2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time\n2018234 - user settings are saved in local storage instead of on cluster\n2018264 - Delete Export button doesn\u0027t work in topology sidebar (general issue with unknown CSV?)\n2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)\n2018275 - Topology graph doesn\u0027t show context menu for Export CSV\n2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked\n2018380 - Migrate docs links to access.redhat.com\n2018413 - Error: context deadline exceeded, OCP 4.8.9\n2018428 - PVC is deleted along with VM even with \"Delete Disks\" unchecked\n2018445 - [e2e][automation] enhance tests for downstream\n2018446 - [e2e][automation] move tests to different level\n2018449 - [e2e][automation] add test about create/delete network attachment definition\n2018490 - [4.10] Image provisioning fails with file name too long\n2018495 - Fix typo in internationalization README\n2018542 - Kernel upgrade does not reconcile DaemonSet\n2018880 - Get \u0027No datapoints found.\u0027 when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit\n2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes\n2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950\n2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10\n2018985 - The rootdisk size is 15Gi of windows VM in customize wizard\n2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
\n2019096 - Update SRO leader election timeout to support SNO\n2019129 - SRO in operator hub points to wrong repo for README\n2019181 - Performance profile does not apply\n2019198 - ptp offset metrics are not named according to the log output\n2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest\n2019284 - Stop action should not in the action list while VMI is not running\n2019346 - zombie processes accumulation and Argument list too long\n2019360 - [RFE] Virtualization Overview page\n2019452 - Logger object in LSO appends to existing logger recursively\n2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect\n2019634 - Pause and migration is enabled in action list for a user who has view only permission\n2019636 - Actions in VM tabs should be disabled when user has view only permission\n2019639 - \"Take snapshot\" should be disabled while VM image is still been importing\n2019645 - Create button is not removed on \"Virtual Machines\" page for view only user\n2019646 - Permission error should pop-up immediately while clicking \"Create VM\" button on template page for view only user\n2019647 - \"Remove favorite\" and \"Create new Template\" should be disabled in template action list for view only user\n2019717 - cant delete VM with un-owned pvc attached\n2019722 - The shared-resource-csi-driver-node pod runs as \u201cBestEffort\u201d qosClass\n2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as \"Always\"\n2019744 - [RFE] Suggest users to download newest RHEL 8 version\n2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level\n2019827 - Display issue with top-level menu items running demo plugin\n2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded\n2019886 - Kuryr unable to finish ports recovery upon controller 
restart\n2019948 - [RFE] Restructring Virtualization links\n2019972 - The Nodes section doesn\u0027t display the csr of the nodes that are trying to join the cluster\n2019977 - Installer doesn\u0027t validate region causing binary to hang with a 60 minute timeout\n2019986 - Dynamic demo plugin fails to build\n2019992 - instance:node_memory_utilisation:ratio metric is incorrect\n2020001 - Update dockerfile for demo dynamic plugin to reflect dir change\n2020003 - MCD does not regard \"dangling\" symlinks as a files, attempts to write through them on next backup, resulting in \"not writing through dangling symlink\" error and degradation. \n2020107 - cluster-version-operator: remove runlevel from CVO namespace\n2020153 - Creation of Windows high performance VM fails\n2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn\u0027t be public\n2020250 - Replacing deprecated ioutil\n2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build\n2020275 - ClusterOperators link in console returns blank page during upgrades\n2020377 - permissions error while using tcpdump option with must-gather\n2020489 - coredns_dns metrics don\u0027t include the custom zone metrics data due to CoreDNS prometheus plugin is not defined\n2020498 - \"Show PromQL\" button is disabled\n2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature\n2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI\n2020664 - DOWN subports are not cleaned up\n2020904 - When trying to create a connection from the Developer view between VMs, it fails\n2021016 - \u0027Prometheus Stats\u0027 of dashboard \u0027Prometheus Overview\u0027 miss data on console compared with Grafana\n2021017 - 404 page not found error on knative eventing page\n2021031 - QE - Fix the topology CI scripts\n2021048 - [RFE] Added MAC Spoof check\n2021053 - Metallb operator presented as 
community operator\n2021067 - Extensive number of requests from storage version operator in cluster\n2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes\n2021135 - [azure-file-csi-driver] \"make unit-test\" returns non-zero code, but tests pass\n2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node\n2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating\n2021152 - imagePullPolicy is \"Always\" for ptp operator images\n2021191 - Project admins should be able to list available network attachment defintions\n2021205 - Invalid URL in git import form causes validation to not happen on URL change\n2021322 - cluster-api-provider-azure should populate purchase plan information\n2021337 - Dynamic Plugins: ResourceLink doesn\u0027t render when passed a groupVersionKind\n2021364 - Installer requires invalid AWS permission s3:GetBucketReplication\n2021400 - Bump documentationBaseURL to 4.10\n2021405 - [e2e][automation] VM creation wizard Cloud Init editor\n2021433 - \"[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified\" test fail permanently on disconnected\n2021466 - [e2e][automation] Windows guest tool mount\n2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver\n2021551 - Build is not recognizing the USER group from an s2i image\n2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character\n2021629 - api request counts for current hour are incorrect\n2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page\n2021693 - Modals assigned modal-lg class are no longer the correct width\n2021724 - Observe \u003e Dashboards: Graph lines are not visible when obscured by other lines\n2021731 - CCO occasionally down, reporting 
networksecurity.googleapis.com API as disabled\n2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags\n2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem\n2022053 - dpdk application with vhost-net is not able to start\n2022114 - Console logging every proxy request\n2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)\n2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long\n2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error . \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2022509 - getOverrideForManifest does not check manifest.GVK.Group\n2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache\n2022612 - no namespace field for \"Kubernetes / Compute Resources / Namespace (Pods)\" admin console dashboard\n2022627 - Machine object not picking up external FIP added to an openstack vm\n2022646 - configure-ovs.sh failure - Error: unknown connection \u0027WARN:\u0027\n2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox\n2022801 - Add Sprint 210 translations\n2022811 - Fix kubelet log rotation file handle leak\n2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations\n2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2022880 - Pipeline renders with minor visual artifact with certain task dependencies\n2022886 - Incorrect URL in operator description\n2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config\n2023060 - [e2e][automation] Windows VM with CDROM migration\n2023077 - [e2e][automation] Home 
Overview Virtualization status\n2023090 - [e2e][automation] Examples of Import URL for VM templates\n2023102 - [e2e][automation] Cloudinit disk of VM from custom template\n2023216 - ACL for a deleted egressfirewall still present on node join switch\n2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9\n2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy\n2023342 - SCC admission should take ephemeralContainers into account\n2023356 - Devfiles can\u0027t be loaded in Safari on macOS (403 - Forbidden)\n2023434 - Update Azure Machine Spec API to accept Marketplace Images\n2023500 - Latency experienced while waiting for volumes to attach to node\n2023522 - can\u0027t remove package from index: database is locked\n2023560 - \"Network Attachment Definitions\" has no project field on the top in the list view\n2023592 - [e2e][automation] add mac spoof check for nad\n2023604 - ACL violation when deleting a provisioning-configuration resource\n2023607 - console returns blank page when normal user without any projects visit Installed Operators page\n2023638 - Downgrade support level for extended control plane integration to Dev Preview\n2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10\n2023675 - Changing CNV Namespace\n2023779 - Fix Patch 104847 in 4.9\n2023781 - initial hardware devices is not loading in wizard\n2023832 - CCO updates lastTransitionTime for non-Status changes\n2023839 - Bump recommended FCOS to 34.20211031.3.0\n2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly\n2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from \"registry:5000\" repository\n2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8\n2024055 - External DNS added extra prefix for the TXT record\n2024108 - Occasionally node remains in 
SchedulingDisabled state even after update has been completed sucessfully\n2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json\n2024199 - 400 Bad Request error for some queries for the non admin user\n2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode\n2024262 - Sample catalog is not displayed when one API call to the backend fails\n2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability\n2024316 - modal about support displays wrong annotation\n2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected\n2024399 - Extra space is in the translated text of \"Add/Remove alternate service\" on Create Route page\n2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view\n2024493 - Observe \u003e Alerting \u003e Alerting rules page throws error trying to destructure undefined\n2024515 - test-blocker: Ceph-storage-plugin tests failing\n2024535 - hotplug disk missing OwnerReference\n2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image\n2024547 - Detail page is breaking for namespace store , backing store and bucket class. 
\n2024551 - KMS resources not getting created for IBM FlashSystem storage\n2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel\n2024613 - pod-identity-webhook starts without tls\n2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded\n2024665 - Bindable services are not shown on topology\n2024731 - linuxptp container: unnecessary checking of interfaces\n2024750 - i18n some remaining OLM items\n2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured\n2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack\n2024841 - test Keycloak with latest tag\n2024859 - Not able to deploy an existing image from private image registry using developer console\n2024880 - Egress IP breaks when network policies are applied\n2024900 - Operator upgrade kube-apiserver\n2024932 - console throws \"Unauthorized\" error after logging out\n2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up\n2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick\n2025230 - ClusterAutoscalerUnschedulablePods should not be a warning\n2025266 - CreateResource route has exact prop which need to be removed\n2025301 - [e2e][automation] VM actions availability in different VM states\n2025304 - overwrite storage section of the DV spec instead of the pvc section\n2025431 - [RFE]Provide specific windows source link\n2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36\n2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node\n2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn\u0027t work for ExternalTrafficPolicy=local\n2025481 - Update VM Snapshots UI\n2025488 - [DOCS] Update the doc for nmstate operator installation\n2025592 - ODC 4.9 supports invalid devfiles 
only\n2025765 - It should not try to load from storageProfile after unchecking\"Apply optimized StorageProfile settings\"\n2025767 - VMs orphaned during machineset scaleup\n2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns \"kubevirt-hyperconverged\" while using customize wizard\n2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size\u2019s vCPUsAvailable instead of vCPUs for the sku. \n2025821 - Make \"Network Attachment Definitions\" available to regular user\n2025823 - The console nav bar ignores plugin separator in existing sections\n2025830 - CentOS capitalizaion is wrong\n2025837 - Warn users that the RHEL URL expire\n2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*\n2025903 - [UI] RoleBindings tab doesn\u0027t show correct rolebindings\n2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2026178 - OpenShift Alerting Rules Style-Guide Compliance\n2026209 - Updation of task is getting failed (tekton hub integration)\n2026223 - Internal error occurred: failed calling webhook \"ptpconfigvalidationwebhook.openshift.io\"\n2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates\n2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct\n2026352 - Kube-Scheduler revision-pruner fail during install of new cluster\n2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment\n2026383 - Error when rendering custom Grafana dashboard through ConfigMap\n2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation\n2026396 - Cachito Issues: sriov-network-operator Image build failure\n2026488 - openshift-controller-manager - delete event is repeating pathologically\n2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity 
of alerts defined. \n2026560 - Cluster-version operator does not remove unrecognized volume mounts\n2026699 - fixed a bug with missing metadata\n2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator\n2026898 - Description/details are missing for Local Storage Operator\n2027132 - Use the specific icon for Fedora and CentOS template\n2027238 - \"Node Exporter / USE Method / Cluster\" CPU utilization graph shows incorrect legend\n2027272 - KubeMemoryOvercommit alert should be human readable\n2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group\n2027288 - Devfile samples can\u0027t be loaded after fixing it on Safari (redirect caching issue)\n2027299 - The status of checkbox component is not revealed correctly in code\n2027311 - K8s watch hooks do not work when fetching core resources\n2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation\n2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don\u0027t use the downstream images\n2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation\n2027498 - [IBMCloud] SG Name character length limitation\n2027501 - [4.10] Bootimage bump tracker\n2027524 - Delete Application doesn\u0027t delete Channels or Brokers\n2027563 - e2e/add-flow-ci.feature fix accessibility violations\n2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges\n2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions\n2027685 - openshift-cluster-csi-drivers pods crashing on PSI\n2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced\n2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string\n2027917 - No settings in hostfirmwaresettings and schema objects 
for masters\n2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf\n2027982 - nncp stucked at ConfigurationProgressing\n2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters\n2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed\n2028030 - Panic detected in cluster-image-registry-operator pod\n2028042 - Desktop viewer for Windows VM shows \"no Service for the RDP (Remote Desktop Protocol) can be found\"\n2028054 - Cloud controller manager operator can\u0027t get leader lease when upgrading from 4.8 up to 4.9\n2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin\n2028141 - Console tests doesn\u0027t pass on Node.js 15 and 16\n2028160 - Remove i18nKey in network-policy-peer-selectors.tsx\n2028162 - Add Sprint 210 translations\n2028170 - Remove leading and trailing whitespace\n2028174 - Add Sprint 210 part 2 translations\n2028187 - Console build doesn\u0027t pass on Node.js 16 because node-sass doesn\u0027t support it\n2028217 - Cluster-version operator does not default Deployment replicas to one\n2028240 - Multiple CatalogSources causing higher CPU use than necessary\n2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn\u0027t be set in HostFirmwareSettings\n2028325 - disableDrain should be set automatically on SNO\n2028484 - AWS EBS CSI driver\u0027s livenessprobe does not respect operator\u0027s loglevel\n2028531 - Missing netFilter to the list of parameters when platform is OpenStack\n2028610 - Installer doesn\u0027t retry on GCP rate limiting\n2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting\n2028695 - destroy cluster does not prune bootstrap instance profile\n2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs\n2028802 - CRI-O panic due to invalid memory address or nil pointer 
dereference\n2028816 - VLAN IDs not released on failures\n2028881 - Override not working for the PerformanceProfile template\n2028885 - Console should show an error context if it logs an error object\n2028949 - Masthead dropdown item hover text color is incorrect\n2028963 - Whereabouts should reconcile stranded IP addresses\n2029034 - enabling ExternalCloudProvider leads to inoperative cluster\n2029178 - Create VM with wizard - page is not displayed\n2029181 - Missing CR from PGT\n2029273 - wizard is not able to use if project field is \"All Projects\"\n2029369 - Cypress tests github rate limit errors\n2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out\n2029394 - missing empty text for hardware devices at wizard review\n2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used\n2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl\n2029521 - EFS CSI driver cannot delete volumes under load\n2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle\n2029579 - Clicking on an Application which has a Helm Release in it causes an error\n2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn\u0027t for HPE\n2029645 - Sync upstream 1.15.0 downstream\n2029671 - VM action \"pause\" and \"clone\" should be disabled while VM disk is still being importing\n2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip\n2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage\n2029785 - CVO panic when an edge is included in both edges and conditionaledges\n2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)\n2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error\n2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace\n2030228 - Fix StorageSpec resources 
field to use correct API\n2030229 - Mirroring status card reflect wrong data\n2030240 - Hide overview page for non-privileged user\n2030305 - Export App job do not completes\n2030347 - kube-state-metrics exposes metrics about resource annotations\n2030364 - Shared resource CSI driver monitoring is not setup correctly\n2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets\n2030534 - Node selector/tolerations rules are evaluated too early\n2030539 - Prometheus is not highly available\n2030556 - Don\u0027t display Description or Message fields for alerting rules if those annotations are missing\n2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation\n2030574 - console service uses older \"service.alpha.openshift.io\" for the service serving certificates. \n2030677 - BOND CNI: There is no option to configure MTU on a Bond interface\n2030692 - NPE in PipelineJobListener.upsertWorkflowJob\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2030847 - PerformanceProfile API version should be v2\n2030961 - Customizing the OAuth server URL does not apply to upgraded cluster\n2031006 - Application name input field is not autofocused when user selects \"Create application\"\n2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex\n2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn\u0027t be started\n2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue\n2031057 - Topology sidebar for Knative services shows a small pod ring with \"0 undefined\" as tooltip\n2031060 - Failing CSR Unit test due to expired test certificate\n2031085 - ovs-vswitchd running more threads than expected\n2031141 - Some pods not able to reach k8s api svc IP 
198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff because the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatically updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table has filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP with aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with a cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver] Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failure and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user can't load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after an ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to delete cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who is not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP address not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration] ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' gets stuck when the cluster resource group no longer exists.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failure
2035772 - AccessMode and VolumeMode are not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential request in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a" shows "Error: Peer netns reference is invalid" after creating test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn't support AWS STS format credentials
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with a valid mute time interval definition is rejected
2036826 - `oc adm prune deployments` can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enabling multitenant
2036937 - Command line tools page shows wrong ODO download link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings are being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log levels in ibm-vpc-block-csi-controller are hard coded
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' reports empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' reports empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The `default` project missed the annotation: openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page becomes blank
2038768 - All the filters on the Observe->Targets page don't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support running pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP] cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade] Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if one network parameter is left empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when creating image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when opening operator hub detail page (and maybe others as well)
2039756 - React missing key warning when opening KnativeServing details
2039770 - Observe dashboard doesn't react to time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to a non-existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated. This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pick up change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because of missing podman dependencies
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more than one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Deleting templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbound when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normally on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed to add same bgp advertisement twice in BGP address pool
2042265 - [IBM] "--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeeds
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug 2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shut down
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io] Cluster common user uses the storageclass without parameter "csi.storage.k8s.io/fstype" to create pvc; pod starts successfully but writing data to the pod's volume failed with "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spams "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there aren't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor if multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud] "--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM] Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when updating spec.storage.ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizing config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degraded because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Solution:\n\nBefore applying the update, back up your existing installation, including\nall applications, configuration files, databases and database settings, and\nso on. \n\nThe References section of this erratum contains a download link for the\nupdate. You must be logged in to download the update. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.1.6 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nBug fixes:\n\n* RHACM 2.1.6 images (BZ#1940581)\n\n* When generating the import cluster string, it can include unescaped\ncharacters (BZ#1934184)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1929338 - CVE-2020-35149 mquery: Code injection via merge or clone operation\n1934184 - When generating the import cluster string, it can include unescaped characters\n1940581 - RHACM 2.1.6 images\n\n5. Relevant releases/architectures:\n\nRed Hat JBoss Core Services on RHEL 7 Server - noarch, ppc64, x86_64\n\n3. Description:\n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages\nthat are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most\nsignificant bug fixes and enhancements included in this release. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT\n1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing\n\n6. 
Package List:\n\nRed Hat JBoss Core Services on RHEL 7 Server:\n\nSource:\njbcs-httpd24-httpd-2.4.37-70.jbcs.el7.src.rpm\njbcs-httpd24-mod_cluster-native-1.3.14-20.Final_redhat_2.jbcs.el7.src.rpm\njbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.src.rpm\njbcs-httpd24-mod_jk-1.2.48-13.redhat_1.jbcs.el7.src.rpm\njbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.src.rpm\njbcs-httpd24-mod_security-2.9.2-60.GA.jbcs.el7.src.rpm\njbcs-httpd24-nghttp2-1.39.2-37.jbcs.el7.src.rpm\njbcs-httpd24-openssl-1.1.1g-6.jbcs.el7.src.rpm\njbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.src.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.src.rpm\n\nnoarch:\njbcs-httpd24-httpd-manual-2.4.37-70.jbcs.el7.noarch.rpm\n\nppc64:\njbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.ppc64.rpm\njbcs-httpd24-mod_http2-debuginfo-1.15.7-14.jbcs.el7.ppc64.rpm\njbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.ppc64.rpm\njbcs-httpd24-mod_md-debuginfo-2.0.8-33.jbcs.el7.ppc64.rpm\njbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.ppc64.rpm\njbcs-httpd24-openssl-chil-debuginfo-1.0.0-5.jbcs.el7.ppc64.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.ppc64.rpm\njbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-20.jbcs.el7.ppc64.rpm\n\nx86_64:\njbcs-httpd24-httpd-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-httpd-debuginfo-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-httpd-devel-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-httpd-selinux-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-httpd-tools-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_cluster-native-1.3.14-20.Final_redhat_2.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_cluster-native-debuginfo-1.3.14-20.Final_redhat_2.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_http2-debuginfo-1.15.7-14.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_jk-ap24-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_jk-debuginfo-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_jk-manual-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_ldap-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd2
4-mod_md-2.0.8-33.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_md-debuginfo-2.0.8-33.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_proxy_html-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_security-2.9.2-60.GA.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_security-debuginfo-2.9.2-60.GA.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_session-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_ssl-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-nghttp2-1.39.2-37.jbcs.el7.x86_64.rpm\njbcs-httpd24-nghttp2-debuginfo-1.39.2-37.jbcs.el7.x86_64.rpm\njbcs-httpd24-nghttp2-devel-1.39.2-37.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-chil-debuginfo-1.0.0-5.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-debuginfo-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-devel-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-libs-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-perl-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-20.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-static-1.1.1g-6.jbcs.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Description:\n\nRed Hat JBoss Web Server is a fully integrated and certified set of\ncomponents for hosting Java web applications. It is comprised of the Apache\nHTTP Server, the Apache Tomcat Servlet container, Apache Tomcat Connector\n(mod_jk), JBoss HTTP Connector (mod_cluster), Hibernate, and the Tomcat\nNative library. 
\n\nSecurity Fix(es):\n\n* golang: crypto/tls: certificate of wrong type is causing TLS client to\npanic\n(CVE-2021-34558)\n* golang: net: lookup functions may return invalid host names\n(CVE-2021-33195)\n* golang: net/http/httputil: ReverseProxy forwards connection headers if\nfirst one is empty (CVE-2021-33197)\n* golang: match/big.Rat: may cause a panic or an unrecoverable fatal error\nif passed inputs with very large exponents (CVE-2021-33198)\n* golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a\ncustom TokenReader (CVE-2021-27918)\n* golang: net/http: panic in ReadRequest and ReadResponse when reading a\nvery large header (CVE-2021-31525)\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (CVE-2021-33196)\n\nIt was found that the CVE-2021-27918, CVE-2021-31525 and CVE-2021-33196\nhave been incorrectly mentioned as fixed in RHSA for Serverless client kn\n1.16.0. Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1983651 - Release of OpenShift Serverless Serving 1.17.0\n1983654 - Release of OpenShift Serverless Eventing 1.17.0\n1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names\n1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty\n1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents\n1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2021-3450" }, { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, { 
"db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-3450", "trust": 2.1 }, { "db": "SIEMENS", "id": "SSA-389290", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/3", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/2", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/4", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/1", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-05", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-09", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-08", "trust": 1.2 }, { "db": "PULSESECURE", "id": "SA44845", "trust": 1.2 }, { "db": "MCAFEE", "id": "SB10356", "trust": 1.2 }, { "db": "PACKETSTORM", "id": "162337", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162196", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162383", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162201", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162183", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162151", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162197", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162189", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163257", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162172", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162307", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162200", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162013", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162041", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162699", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-388430", "trust": 0.1 }, { "db": "ICS CERT", "id": 
"ICSA-22-069-09", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-3450", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162694", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166279", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164192", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, "id": "VAR-202103-1463", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-388430" } ], "trust": 0.38583214499999996 }, "last_update_date": "2024-07-23T21:05:39.679000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "The Register", "trust": 0.2, "url": "https://www.theregister.co.uk/2021/03/25/openssl_bug_fix/" }, { "title": "Red Hat: CVE-2021-3450", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-3450" }, { "title": "IBM: Security Bulletin: OpenSSL Vulnerabilities Affect IBM Sterling Connect:Express for UNIX (CVE-2021-3449, CVE-2021-3450)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=084930e972e3fa390ca483e019684fa8" }, { "title": "Arch Linux Advisories: [ASA-202103-10] openssl: multiple issues", 
"trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202103-10" }, { "title": "Amazon Linux 2: ALAS2-2021-1622", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1622" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-3450 log" }, { "title": "Cisco: Multiple Vulnerabilities in OpenSSL Affecting Cisco Products: March 2021", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=cisco_security_advisories_and_alerts_ciscoproducts\u0026qid=cisco-sa-openssl-2021-ghy28djd" }, { "title": "Tenable Security Advisories: [R1] Nessus 8.13.2 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-05" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Common Services", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-117" }, { "title": "Tenable Security Advisories: [R1] Nessus Network Monitor 5.13.1 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-09" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Analyzer viewpoint", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-119" }, { "title": "IBM: Security Bulletin: Vulnerabilities in XStream, Java, OpenSSL, WebSphere Application Server Liberty and Node.js affect IBM Spectrum Control", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=928e1f86fc9400462623e646ce4f11d9" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220056 - security advisory" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=4a9822530e6b610875f83ffc10e02aba" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, { "title": "yr_of_the_jellyfish", "trust": 0.1, "url": "https://github.com/rnbochsr/yr_of_the_jellyfish " }, { "title": "", "trust": 0.1, "url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories " }, { "title": "tekton-image-scan-trivy", "trust": 0.1, "url": "https://github.com/vinamra28/tekton-image-scan-trivy " }, { "title": "TASSL-1.1.1k", "trust": 0.1, "url": "https://github.com/jntass/tassl-1.1.1k " }, { "title": "", "trust": 0.1, "url": "https://github.com/scholarnishu/trivy-by-aquasecurity " }, { "title": "", "trust": 0.1, "url": "https://github.com/teresaweber685/book_list " }, { "title": "", "trust": 0.1, "url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc " }, { "title": "", "trust": 0.1, "url": "https://github.com/fredrkl/trivy-demo " }, { "title": "BleepingComputer", "trust": 0.1, "url": "https://www.bleepingcomputer.com/news/security/openssl-fixes-severe-dos-certificate-validation-vulnerabilities/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3450" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-295", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { 
"@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.3, "url": "https://tools.cisco.com/security/center/content/ciscosecurityadvisory/cisco-sa-openssl-2021-ghy28djd" }, { "trust": 1.2, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf" }, { "trust": 1.2, "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44845" }, { "trust": 1.2, "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2021-0013" }, { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20210326-0006/" }, { "trust": 1.2, "url": "https://www.openssl.org/news/secadv/20210325.txt" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-05" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-08" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-09" }, { "trust": 1.2, "url": "https://security.gentoo.org/glsa/202103-03" }, { "trust": 1.2, "url": "https://mta.openssl.org/pipermail/openssl-announce/2021-march/000198.html" }, { "trust": 1.2, "url": "https://security.freebsd.org/advisories/freebsd-sa-21:07.openssl.asc" }, { "trust": 1.2, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/1" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/2" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/3" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/4" }, { "trust": 1.1, "url": 
"https://kc.mcafee.com/corporate/index?page=content\u0026id=sb10356" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=2a40b7bc7b94dd7de897a74571e7024f0cf0d63b" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.9, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.3, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.3, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { 
"trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27363" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3347" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28374" }, { 
"trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27364" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26708" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27365" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-27152" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27363" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3347" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27365" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-0466" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27364" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28374" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-26708" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2a40b7bc7b94dd7de897a74571e7024f0cf0d63b" }, { "trust": 0.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026amp;id=sb10356" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/295.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-069-09" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916" }, { 
"trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20907" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13631" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14422" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13632" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-16168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13630" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20387" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/serverless_applications/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3115" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-16935" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19221" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-6405" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20388" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3114" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2021" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-13631" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5018" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19956" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13632" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14422" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13630" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-6405" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20218" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20916" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15586" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1448" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30761" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27781" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0055" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2014-3577" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": 
"https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25660" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21684" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0056" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39226" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9952" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25677" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30666" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20218" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35149" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35149" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14040" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1199" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1202" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33195" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33197" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231" }, { "trust": 0.1, "url": 
"https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33195" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-34558" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3556" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3326" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33197" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3421" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31525" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3703" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index" } ], "sources": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, 
"sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-03-25T00:00:00", "db": "VULHUB", "id": "VHN-388430" }, { "date": "2021-03-25T00:00:00", "db": "VULMON", "id": "CVE-2021-3450" }, { "date": "2021-05-19T14:19:18", "db": "PACKETSTORM", "id": "162694" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": "2021-04-29T14:37:49", "db": "PACKETSTORM", "id": "162383" }, { "date": "2022-03-11T16:38:38", "db": "PACKETSTORM", "id": "166279" }, { "date": "2021-04-14T16:40:32", "db": "PACKETSTORM", "id": "162183" }, { "date": "2021-04-26T19:21:56", "db": "PACKETSTORM", "id": "162337" }, { "date": "2021-04-15T13:49:54", "db": "PACKETSTORM", "id": "162196" }, { "date": "2021-04-15T13:50:39", "db": "PACKETSTORM", "id": "162201" }, { "date": "2021-09-17T16:04:56", "db": "PACKETSTORM", "id": "164192" }, { "date": "2021-03-25T15:15:13.560000", "db": "NVD", "id": "CVE-2021-3450" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-28T00:00:00", "db": "VULHUB", "id": "VHN-388430" }, { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2021-3450" }, { "date": "2023-11-07T03:38:00.923000", "db": "NVD", "id": 
"CVE-2021-3450" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat Security Advisory 2021-2021-01", "sources": [ { "db": "PACKETSTORM", "id": "162694" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code execution", "sources": [ { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "162383" } ], "trust": 0.2 } }
var-202103-1464
Vulnerability from variot
An OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client. If a TLSv1.2 renegotiation ClientHello omits the signature_algorithms extension (where it was present in the initial ClientHello), but includes a signature_algorithms_cert extension, then a NULL pointer dereference will result, leading to a crash and a denial of service attack. A server is only vulnerable if it has TLSv1.2 and renegotiation enabled (which is the default configuration). OpenSSL TLS clients are not impacted by this issue. All OpenSSL 1.1.1 versions are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. Fixed in OpenSSL 1.1.1k (Affected 1.1.1-1.1.1j). OpenSSL is an open source general-purpose encryption library from the OpenSSL team that implements the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. Exploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate-authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. This advisory is available at the following link: tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-openssl-2021-GHY28dJd. In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.
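Not part of the advisory text, but as an illustrative sketch: since the crash path requires TLSv1.2 renegotiation, a server that cannot yet upgrade to OpenSSL 1.1.1k can reduce exposure by disabling renegotiation outright. A minimal Python example, assuming Python 3.7+ built against OpenSSL 1.1.0h or later (where ssl.OP_NO_RENEGOTIATION is exposed):

```python
import ssl

# Build a TLS server context. CVE-2021-3449 only affects servers with
# TLSv1.2 renegotiation enabled, so besides upgrading to OpenSSL 1.1.1k,
# one hedged mitigation is to refuse renegotiation entirely.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# OP_NO_RENEGOTIATION is only present when Python was built against
# OpenSSL >= 1.1.0h; guard the attribute for older builds.
if hasattr(ssl, "OP_NO_RENEGOTIATION"):
    ctx.options |= ssl.OP_NO_RENEGOTIATION

print(bool(ctx.options & getattr(ssl, "OP_NO_RENEGOTIATION", 0)))
```

This only narrows the attack surface; the actual fix for the NULL pointer dereference is the upgrade to 1.1.1k noted above.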
Security Fix(es):
- NooBaa: noobaa-operator leaking RPC AuthToken into log files (CVE-2021-3528)
For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
-
Currently, a newly restored PVC cannot be mounted if some of the OpenShift Container Platform nodes are running on a version of Red Hat Enterprise Linux which is less than 8.2, and the snapshot from which the PVC was restored is deleted. Workaround: Do not delete the snapshot from which the PVC was restored until the restored PVC is deleted. (BZ#1962483)
-
Previously, the default backingstore was not created on AWS S3 when OpenShift Container Storage was deployed, due to incorrect identification of AWS S3. With this update, the default backingstore gets created when OpenShift Container Storage is deployed on AWS S3. (BZ#1927307)
-
Previously, log messages were printed to the endpoint pod log even if the debug option was not set. With this update, the log messages are printed to the endpoint pod log only when the debug option is set. (BZ#1938106)
-
Previously, the PVCs could not be provisioned as the rook-ceph-mds did not register the pod IP on the monitor servers, and hence every mount on the filesystem timed out, resulting in CephFS volume provisioning failure. With this update, an argument --public-addr=podIP is added to the MDS pod when the host network is not enabled, and hence the CephFS volume provisioning does not fail. (BZ#1949558)
-
Previously, OpenShift Container Storage 4.2 clusters were not updated with the correct cache value, and hence MDSs in standby-replay might report an oversized cache, as rook did not apply the mds_cache_memory_limit argument during upgrades. With this update, the mds_cache_memory_limit argument is applied during upgrades and the mds daemon operates normally. (BZ#1951348)
-
Previously, the coredumps were not generated in the correct location as rook was setting the config option log_file to an empty string since logging happened on stdout and not on the files, and hence Ceph read the value of the log_file to build the dump path. With this update, rook does not set the log_file and keeps Ceph's internal default, and hence the coredumps are generated in the correct location and are accessible under /var/log/ceph/. (BZ#1938049)
-
Previously, Ceph became inaccessible, as the mons lose quorum if a mon pod was drained while another mon was failing over. With this update, voluntary mon drains are prevented while a mon is failing over, and hence Ceph does not become inaccessible. (BZ#1946573)
-
Previously, the mon quorum was at risk, as the operator could erroneously remove the new mon if the operator was restarted during a mon failover. With this update, the operator completes the same mon failover after the operator is restarted, and hence the mon quorum is more reliable in the node drains and mon failover scenarios. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1938106 - [GSS][RFE]Reduce debug level for logs of Nooba Endpoint pod 1950915 - XSS Vulnerability with Noobaa version 5.5.0-3bacc6b 1951348 - [GSS][CephFS] health warning "MDS cache is too large (3GB/1GB); 0 inodes in use by clients, 0 stray files" for the standby-replay 1951600 - [4.6.z][Clone of BZ #1936545] setuid and setgid file bits are not retained after a OCS CephFS CSI restore 1955601 - CVE-2021-3528 NooBaa: noobaa-operator leaking RPC AuthToken into log files 1957189 - [Rebase] Use RHCS4.2z1 container image with OCS 4..6.5[may require doc update for external mode min supported RHCS version] 1959980 - When a node is being drained, increase the mon failover timeout to prevent unnecessary mon failover 1959983 - [GSS][mon] rook-operator scales mons to 4 after healthCheck timeout 1962483 - [RHEL7][RBD][4.6.z clone] FailedMount error when using restored PVC on app pod
Bug Fix(es):
-
WMCO patch pub-key-hash annotation to Linux node (BZ#1945248)
-
LoadBalancer Service type with invalid external loadbalancer IP breaks the datapath (BZ#1952917)
-
Telemetry info not completely available to identify windows nodes (BZ#1955319)
-
WMCO incorrectly shows node as ready after a failed configuration (BZ#1956412)
-
kube-proxy service terminated unexpectedly after recreated LB service (BZ#1963263)
- Solution:
For Windows Machine Config Operator upgrades, see the following documentation:
https://docs.openshift.com/container-platform/4.7/windows_containers/windows-node-upgrades.html
- Bugs fixed (https://bugzilla.redhat.com/):
1945248 - WMCO patch pub-key-hash annotation to Linux node 1946538 - CVE-2021-25736 kubernetes: LoadBalancer Service type don't create a HNS policy for empty or invalid external loadbalancer IP, what could lead to MITM 1952917 - LoadBalancer Service type with invalid external loadbalancer IP breaks the datapath 1955319 - Telemetry info not completely available to identify windows nodes 1956412 - WMCO incorrectly shows node as ready after a failed configuration 1963263 - kube-proxy service terminated unexpectedly after recreated LB service
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security:
-
fastify-reply-from: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21321)
-
fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21322)
-
nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)
-
redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)
-
redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)
-
nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
-
nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
-
golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)
-
golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
-
nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)
-
oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)
-
redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
-
nodejs-lodash: command injection via template (CVE-2021-23337)
-
nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)
-
browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)
-
nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)
-
nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
-
nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)
-
nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
-
openssl: integer overflow in CipherUpdate (CVE-2021-23840)
-
openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
-
nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)
-
grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)
-
nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)
-
nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)
-
ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)
-
normalize-url: ReDoS for data URLs (CVE-2021-33502)
-
nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)
-
nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
-
html-parse-stringify: Regular Expression DoS (CVE-2021-23346)
-
openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)
For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.
Bugs:
-
RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)
-
cluster became offline after apiserver health check (BZ# 1942589)
- Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913444 - RFE Make the source code for the endpoint-metrics-operator public 1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull 1927520 - RHACM 2.3.0 images 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection 1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate 1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms 1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application 1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header 1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call 1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS 1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service 1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service 1942589 - cluster became offline after apiserver health check 1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() 1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character 1944827 - CVE-2021-28918 
nodejs-netmask: improper input validation of octal input data 1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service 1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing 1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js 1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service 1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option 1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command 1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets 1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs 1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method 1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions 1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id 1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
- It is comprised of the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for detailed instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too many recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim column value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - Running opm index prune fails with error: removing operator package cic-operator: FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when it cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when querying metrics about alert rules KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still being imported
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link, when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices are not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed successfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalization is wrong
2025837 - Warn users that the RHEL URL expires
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updating a task fails (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a large number of alerts are defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stuck at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema even though they cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not usable if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM actions "pause" and "clone" should be disabled while VM disk is still being imported
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job does not complete
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff because the image is built for linux/amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table has a filter when the table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP with aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - DU validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environments
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value succeeds
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to delete cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP address, not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' gets stuck when the cluster resource group no longer exists
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential request in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a" shows "Error: Peer netns reference is invalid" after creating test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn't support AWS STS format credentials
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with a valid mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enabling multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation: openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support running pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - DU policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more than one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce
This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.
This release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages that are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most significant bug fixes and enhancements included in this release.
Solution:
Before applying the update, back up your existing installation, including all applications, configuration files, databases and database settings, and so on.
The References section of this erratum contains a download link for the update. You must be logged in to download the update.
Bugs fixed (https://bugzilla.redhat.com/):
1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT
1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing
- ==========================================================================
Ubuntu Security Notice USN-5038-1
August 12, 2021
postgresql-10, postgresql-12, postgresql-13 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.04
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in PostgreSQL.
Software Description:
- postgresql-13: Object-relational SQL database
- postgresql-12: Object-relational SQL database
- postgresql-10: Object-relational SQL database
Details:
It was discovered that the PostgreSQL planner could create incorrect plans in certain circumstances. A remote attacker could use this issue to cause PostgreSQL to crash, resulting in a denial of service, or possibly obtain sensitive information from memory. (CVE-2021-3677)
It was discovered that PostgreSQL incorrectly handled certain SSL renegotiation ClientHello messages from clients. A remote attacker could possibly use this issue to cause PostgreSQL to crash, resulting in a denial of service. (CVE-2021-3449)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.04: postgresql-13 13.4-0ubuntu0.21.04.1
Ubuntu 20.04 LTS: postgresql-12 12.8-0ubuntu0.20.04.1
Ubuntu 18.04 LTS: postgresql-10 10.18-0ubuntu0.18.04.1
This update uses a new upstream release, which includes additional bug fixes. After a standard system update you need to restart PostgreSQL to make all the necessary changes.
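The "update, then confirm the fixed version, then restart" guidance above can be sketched as a version check. This is a simplified comparison written for illustration, not dpkg's full version-ordering algorithm; the package names and fixed versions are the ones listed in this notice, and on a real host the installed version would come from `dpkg-query -W -f='${Version}' <package>`.

```python
# Simplified sketch: check whether an installed Ubuntu package version is at
# least the fixed version from this notice. Real systems should rely on
# `dpkg --compare-versions`; this naive key only handles the dotted/dashed
# numeric forms that appear in the notice itself.
import re

def version_key(v: str):
    # Split into numeric and alphabetic runs so a string like
    # "12.8-0ubuntu0.20.04.1" compares field by field.
    return [int(p) if p.isdigit() else p for p in re.findall(r"\d+|[a-zA-Z]+", v)]

# Fixed versions from USN-5038-1.
FIXED = {
    "postgresql-13": "13.4-0ubuntu0.21.04.1",
    "postgresql-12": "12.8-0ubuntu0.20.04.1",
    "postgresql-10": "10.18-0ubuntu0.18.04.1",
}

def is_patched(package: str, installed: str) -> bool:
    return version_key(installed) >= version_key(FIXED[package])

print(is_patched("postgresql-12", "12.8-0ubuntu0.20.04.1"))  # True
print(is_patched("postgresql-12", "12.7-0ubuntu0.20.04.1"))  # False
```

After confirming the patched package is installed, the PostgreSQL service still has to be restarted (e.g. via systemctl) for the fix to take effect, as the notice says.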
Security Fix(es):
- golang: crypto/tls: certificate of wrong type is causing TLS client to panic (CVE-2021-34558)
- golang: net: lookup functions may return invalid host names (CVE-2021-33195)
- golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty (CVE-2021-33197)
- golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents (CVE-2021-33198)
- golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a custom TokenReader (CVE-2021-27918)
- golang: net/http: panic in ReadRequest and ReadResponse when reading a very large header (CVE-2021-31525)
- golang: archive/zip: malformed archive may cause panic or memory exhaustion (CVE-2021-33196)
It was found that CVE-2021-27918, CVE-2021-31525, and CVE-2021-33196 were incorrectly listed as fixed in the RHSA for the Serverless client kn 1.16.0.
Bugs fixed (https://bugzilla.redhat.com/):
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic
1983651 - Release of OpenShift Serverless Serving 1.17.0
1983654 - Release of OpenShift Serverless Eventing 1.17.0
1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names
1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty
1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents
1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196
5. OpenSSL Security Advisory [25 March 2021]
CA certificate check bypass with X509_V_FLAG_X509_STRICT (CVE-2021-3450)
Severity: High
The X509_V_FLAG_X509_STRICT flag enables additional security checks of the certificates present in a certificate chain. It is not set by default.
Starting from OpenSSL version 1.1.1h a check to disallow certificates in the chain that have explicitly encoded elliptic curve parameters was added as an additional strict check.
An error in the implementation of this check meant that the result of a previous check to confirm that certificates in the chain are valid CA certificates was overwritten. This effectively bypasses the check that non-CA certificates must not be able to issue other certificates.
If a "purpose" has been configured then there is a subsequent opportunity for checks that the certificate is a valid CA. All of the named "purpose" values implemented in libcrypto perform this check. Therefore, where a purpose is set the certificate chain will still be rejected even when the strict flag has been used. A purpose is set by default in libssl client and server certificate verification routines, but it can be overridden or removed by an application.
In order to be affected, an application must explicitly set the X509_V_FLAG_X509_STRICT verification flag and either not set a purpose for the certificate verification or, in the case of TLS client or server applications, override the default purpose.
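For illustration, the affected configuration can be shown through Python's ssl module, which wraps the same OpenSSL verification flags (using Python here is my choice for a runnable sketch — the advisory itself concerns the C API, where an application would call X509_VERIFY_PARAM_set_flags with X509_V_FLAG_X509_STRICT). Note that a default context sets a purpose, so merely enabling the strict flag as below still leaves the purpose check in place; per the advisory, only applications that additionally overrode or removed the purpose were exposed.

```python
# Sketch of the opt-in configuration CVE-2021-3450 requires, via Python's
# OpenSSL bindings (an assumption for illustration; the advisory discusses
# the C-level verification flags directly).
import ssl

# create_default_context() configures a purpose (here, server authentication).
# That purpose check is the secondary safeguard that kept default TLS clients
# and servers safe even on vulnerable OpenSSL versions.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# An application becomes a candidate for the bug only by opting in to the
# strict checks AND overriding/removing the purpose check.
ctx.verify_flags |= ssl.VERIFY_X509_STRICT

print(bool(ctx.verify_flags & ssl.VERIFY_X509_STRICT))  # True
```

The takeaway matches the advisory: the strict flag is never set by default, so out-of-the-box configurations were not affected.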
This issue was reported to OpenSSL on 18th March 2021 by Benjamin Kaduk from Akamai and was discovered by Xiang Ding and others at Akamai. The fix was developed by Tomáš Mráz.
The NULL pointer dereference in signature_algorithms processing (CVE-2021-3449) was reported to OpenSSL on 17th March 2021 by Nokia. The fix was developed by Peter Kästle and Samuel Sapalski from Nokia.
Note
OpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended support is available for premium support customers: https://www.openssl.org/support/contracts.html
OpenSSL 1.1.0 is out of support and no longer receiving updates of any kind.
References
URL for this Security Advisory: https://www.openssl.org/news/secadv/20210325.txt
Note: the online version of the advisory may be updated with additional details over time.
For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202103-1464", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "nessus", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "8.13.1" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.57" }, { "model": "mysql workbench", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "scalance s627-2m", "scope": 
"gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "sonicos", "scope": "eq", "trust": 1.0, "vendor": "sonicwall", "version": "7.0.1.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "12.12.0" }, { "model": "scalance xr-300wg", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "mysql server", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.15" }, { "model": "quantum security management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r81" }, { "model": "scalance xp-200", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.0.0" }, { "model": "sinamics connect 300", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance xc-200", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "simatic net cp 1543-1", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "simatic net cp 1542sp-1 irc", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.13.0" }, { "model": "scalance xr526-8c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "simatic net cp1243-7 lte us", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "scalance w700", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.5" }, { "model": "sinec infrastructure network services", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0.1.1" }, { "model": "scalance xr552-12", "scope": "lt", "trust": 
1.0, "vendor": "siemens", "version": "6.4" }, { "model": "simatic mv500", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance xm-400", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "scalance s615", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.2" }, { "model": "tia administrator", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "log correlation engine", "scope": "lt", "trust": 1.0, "vendor": "tenable", "version": "6.0.9" }, { "model": "multi-domain management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r80.40" }, { "model": "capture client", "scope": "eq", "trust": 1.0, "vendor": "sonicwall", "version": "3.5" }, { "model": "simatic cloud connect 7", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "simatic pcs neo", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic rf186c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic s7-1200 cpu 1212c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic cp 1242-7 gprs v2", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "simatic s7-1200 cpu 1215 fc", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic logon", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "1.6.0.2" }, { "model": "freebsd", "scope": "eq", "trust": 1.0, "vendor": "freebsd", "version": "12.2" }, { "model": "simatic logon", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.5" }, { "model": "simatic hmi basic panels 2nd generation", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, 
"vendor": "netapp", "version": null }, { "model": "quantum security gateway", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r81" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.16.1" }, { "model": "primavera unifier", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "17.12" }, { "model": "simatic rf188ci", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "oncommand insight", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec nms", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.1" }, { "model": "scalance xr524-8c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "tim 1531 irc", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.0" }, { "model": "sinec pni", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "storagegrid", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "12.22.1" }, { "model": "communications communications policy management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.6.0.0.0" }, { "model": "scalance s612", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "simatic hmi ktp mobile panels", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "15.0.0" }, { "model": "sinumerik opc ua server", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": 
"web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "e-series performance analyzer", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "simatic cp 1242-7 gprs v2", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "tenable.sc", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "5.17.0" }, { "model": "zfs storage appliance kit", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.8" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "10.24.0" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "10.12.0" }, { "model": "simatic wincc runtime advanced", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic process historian opc ua server", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2019" }, { "model": "simatic net cp 1545-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "ruggedcom rcm1224", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.2" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "simatic cloud connect 7", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "1.1" }, { "model": "scalance xr528-6m", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "14.14.0" }, { "model": "cloud volumes ontap mediator", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "multi-domain management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": 
"r81" }, { "model": "scalance xf-200ba", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "simatic s7-1500 cpu 1518-4 pn\\/dp mfp", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.12" }, { "model": "scalance m-800", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.2" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1k" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.15.0" }, { "model": "snapcenter", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "scalance lpe9403", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic hmi comfort outdoor panels", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance sc-600", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.0" }, { "model": "quantum security management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r80.40" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.3.5" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.59" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.58" }, { "model": "sma100", "scope": "lt", "trust": 1.0, "vendor": "sonicwall", "version": "10.2.1.0-17sv" }, { "model": "simatic rf186ci", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" 
}, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.12" }, { "model": "sinema server", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "14.0" }, { "model": "simatic net cp 1243-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "simatic s7-1200 cpu 1217c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "primavera unifier", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "17.7" }, { "model": "simatic rf185c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "tenable.sc", "scope": "gte", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.12" }, { "model": "simatic s7-1200 cpu 1211c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance s623", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "simatic s7-1200 cpu 1215c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.0.0.2" }, { "model": "simatic rf166c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "tim 1531 irc", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.2" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.13.0" }, { "model": "simatic net cp 1543sp-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.1" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "scalance w1700", "scope": "gte", "trust": 1.0, 
"vendor": "siemens", "version": "2.0" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "5.7.33" }, { "model": "simatic s7-1200 cpu 1212fc", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic net cp1243-7 lte eu", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "simatic wincc telecontrol", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "simatic net cp 1243-8 irc", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "simatic pcs 7 telecontrol", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "15.14.0" }, { "model": "simatic rf360r", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "jd edwards enterpriseone tools", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "9.2.6.0" }, { "model": "sma100", "scope": "gte", "trust": 1.0, "vendor": "sonicwall", "version": "10.2.0.0" }, { "model": "scalance xb-200", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1" }, { "model": "scalance s602", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.0.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.11.0" }, { "model": "quantum security gateway", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r80.40" }, { "model": "secure global desktop", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.6" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", 
"version": "5.11.1" }, { "model": "essbase", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.2" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.3.1.2" }, { "model": "mysql connectors", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "secure backup", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "18.1.0.1.0" }, { "model": "simatic net cp 1543-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.2" }, { "model": "simatic s7-1200 cpu 1214 fc", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "simatic rf188c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic s7-1200 cpu 1214c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "enterprise manager for storage management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.4.0.0" }, { "model": "simatic pdm", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "9.1.0.7" }, { "model": "hitachi ops center analyzer viewpoint", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "storagegrid", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "quantum security gateway", "scope": null, "trust": 0.8, "vendor": "\u30c1\u30a7\u30c3\u30af \u30dd\u30a4\u30f3\u30c8 \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2 \u30c6\u30af\u30ce\u30ed\u30b8\u30fc\u30ba", "version": null }, { "model": "tenable.sc", "scope": null, "trust": 0.8, "vendor": "tenable", "version": null }, { "model": "nessus", "scope": null, "trust": 0.8, "vendor": "tenable", "version": null }, { "model": "oncommand workflow automation", "scope": 
null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "freebsd", "scope": null, "trust": 0.8, "vendor": "freebsd", "version": null }, { "model": "hitachi ops center common services", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "santricity smi-s provider", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "mcafee web gateway \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2", "scope": null, "trust": 0.8, "vendor": "\u30de\u30ab\u30d5\u30a3\u30fc", "version": null }, { "model": "e-series performance analyzer", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "jp1/file transmission server/ftp", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "quantum security management", "scope": null, "trust": 0.8, "vendor": "\u30c1\u30a7\u30c3\u30af \u30dd\u30a4\u30f3\u30c8 \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2 \u30c6\u30af\u30ce\u30ed\u30b8\u30fc\u30ba", "version": null }, { "model": "openssl", "scope": null, "trust": 0.8, "vendor": "openssl", "version": null }, { "model": "cloud volumes ontap \u30e1\u30c7\u30a3\u30a8\u30fc\u30bf", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "jp1/base", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "web gateway cloud service", "scope": null, "trust": 0.8, "vendor": "\u30de\u30ab\u30d5\u30a3\u30fc", "version": null }, { "model": "multi-domain management", "scope": null, "trust": 0.8, "vendor": "\u30c1\u30a7\u30c3\u30af \u30dd\u30a4\u30f3\u30c8 \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2 \u30c6\u30af\u30ce\u30ed\u30b8\u30fc\u30ba", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, 
"configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1k", "versionStartIncluding": "1.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapcenter:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storagegrid:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_insight:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:netapp:cloud_volumes_ontap_mediator:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:e-series_performance_analyzer:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:tenable:tenable.sc:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.17.0", "versionStartIncluding": "5.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.13.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.13.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:log_correlation_engine:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.0.9", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:mcafee:web_gateway:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_management_firmware:r80.40:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_management_firmware:r81:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:checkpoint:quantum_security_management:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:checkpoint:multi-domain_management_firmware:r80.40:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:checkpoint:multi-domain_management_firmware:r81:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:checkpoint:multi-domain_management:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_gateway_firmware:r80.40:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_gateway_firmware:r81:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:checkpoint:quantum_security_gateway:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "17.12", "versionStartIncluding": "17.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:19.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_for_storage_management:13.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:20.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:zfs_storage_appliance_kit:8.8:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_global_desktop:5.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:20.3.1.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:21.0.0.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:19.3.5:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "versionStartIncluding": "8.0.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.7.33", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:essbase:21.2:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_connectors:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.2.6.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:21.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_backup:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "18.1.0.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_communications_policy_management:12.6.0.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:sonicwall:sma100_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.2.1.0-17sv", "versionStartIncluding": "10.2.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:sonicwall:sma100:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:sonicwall:capture_client:3.5:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:sonicwall:sonicos:7.0.1.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rcm1224_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rcm1224:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_lpe9403_firmware:*:*:*:*:*:*:*:*", 
"cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_lpe9403:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_m-800_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_m-800:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s602_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s602:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s612_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s612:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s615_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s615:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:siemens:scalance_s623_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s623:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s627-2m_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s627-2m:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_sc-600_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_sc-600:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_w700_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.5", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_w700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_w1700_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_w1700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], 
"operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xb-200_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xb-200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xc-200_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xc-200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xf-200ba_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xf-200ba:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xm-400_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xm-400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xp-200_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:h:siemens:scalance_xp-200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr-300wg_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr-300wg:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr524-8c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr524-8c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr526-8c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr526-8c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr528-6m_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr528-6m:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr552-12_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": 
"6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr552-12:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cloud_connect_7_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "1.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cloud_connect_7_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_cloud_connect_7:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cp_1242-7_gprs_v2_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cp_1242-7_gprs_v2_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_cp_1242-7_gprs_v2:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_hmi_basic_panels_2nd_generation_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_hmi_basic_panels_2nd_generation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_hmi_comfort_outdoor_panels_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], 
"operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_hmi_comfort_outdoor_panels:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_hmi_ktp_mobile_panels_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_hmi_ktp_mobile_panels:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_mv500_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_mv500:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1243-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1243-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp1243-7_lte_eu_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp1243-7_lte_eu:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:siemens:simatic_net_cp1243-7_lte_us_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp1243-7_lte_us:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1243-8_irc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1243-8_irc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1542sp-1_irc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1542sp-1_irc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1543-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0", "versionStartIncluding": "2.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1543-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1543sp-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:h:siemens:simatic_net_cp_1543sp-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1545-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "1.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1545-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_pcs_7_telecontrol_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_pcs_7_telecontrol:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_pcs_neo_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_pcs_neo:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_pdm_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "9.1.0.7", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_pdm:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_process_historian_opc_ua_server_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2019", 
"vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_process_historian_opc_ua_server:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf166c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf166c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf185c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf185c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf186c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf186c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf186ci_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf186ci:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf188c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { 
"children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf188c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf188ci_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf188ci:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf360r_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf360r:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1211c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1211c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1212c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1212c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1212fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], 
"cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1212fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1214_fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1214_fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1214c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1214c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1215_fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1215_fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1215c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true
} ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1215c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1217c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1217c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1500_cpu_1518-4_pn\\/dp_mfp_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1500_cpu_1518-4_pn\\/dp_mfp:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:sinamics_connect_300_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:sinamics_connect_300:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:tim_1531_irc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.2", "versionStartIncluding": "2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:tim_1531_irc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:siemens:simatic_wincc_runtime_advanced:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp2_update1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:simatic_logon:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "1.6.0.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:simatic_logon:1.5:sp3_update_1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:simatic_wincc_telecontrol:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_nms:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_nms:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_pni:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:tia_administrator:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp2_update2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinumerik_opc_ua_server:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_infrastructure_network_services:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "14.14.0", "versionStartIncluding": "14.0.0", "vulnerable": true }, { 
"cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "10.12.0", "versionStartIncluding": "10.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "12.12.0", "versionStartIncluding": "12.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "14.16.1", "versionStartIncluding": "14.15.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "12.22.1", "versionStartIncluding": "12.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndIncluding": "10.24.0", "versionStartIncluding": "10.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "15.14.0", "versionStartIncluding": "15.0.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3449" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "164192" } ], "trust": 0.7 }, "cve": "CVE-2021-3449", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": 
"@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-3449", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-388130", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.2, "impactScore": 3.6, "integrityImpact": 
"NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 5.9, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-3449", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-3449", "trust": 1.8, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-388130", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-3449", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "An OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client. If a TLSv1.2 renegotiation ClientHello omits the signature_algorithms extension (where it was present in the initial ClientHello), but includes a signature_algorithms_cert extension then a NULL pointer dereference will result, leading to a crash and a denial of service attack. A server is only vulnerable if it has TLSv1.2 and renegotiation enabled (which is the default configuration). OpenSSL TLS clients are not impacted by this issue. All OpenSSL 1.1.1 versions are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. 
Fixed in OpenSSL 1.1.1k (Affected 1.1.1-1.1.1j). OpenSSL is an open source general encryption library of the OpenSSL team that can implement the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. \nExploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. \nThis advisory is available at the following link: tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-openssl-2021-GHY28dJd. In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3 compatible API. \n\nSecurity Fix(es):\n\n* NooBaa: noobaa-operator leaking RPC AuthToken into log files\n(CVE-2021-3528)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, and other related information, refer to the CVE page(s) listed in\nthe References section. \n\nBug Fix(es):\n\n* Currently, a newly restored PVC cannot be mounted if some of the\nOpenShift Container Platform nodes are running on a version of Red Hat\nEnterprise Linux which is less than 8.2, and the snapshot from which the\nPVC was restored is deleted. \nWorkaround: Do not delete the snapshot from which the PVC was restored\nuntil the restored PVC is deleted. (BZ#1962483)\n\n* Previously, the default backingstore was not created on AWS S3 when\nOpenShift Container Storage was deployed, due to incorrect identification\nof AWS S3. With this update, the default backingstore gets created when\nOpenShift Container Storage is deployed on AWS S3. 
(BZ#1927307)\n\n* Previously, log messages were printed to the endpoint pod log even if the\ndebug option was not set. With this update, the log messages are printed to\nthe endpoint pod log only when the debug option is set. (BZ#1938106)\n\n* Previously, the PVCs could not be provisioned as the `rook-ceph-mds` did\nnot register the pod IP on the monitor servers, and hence every mount on\nthe filesystem timed out, resulting in CephFS volume provisioning failure. \nWith this update, an argument `--public-addr=podIP` is added to the MDS pod\nwhen the host network is not enabled, and hence the CephFS volume\nprovisioning does not fail. (BZ#1949558)\n\n* Previously, OpenShift Container Storage 4.2 clusters were not updated\nwith the correct cache value, and hence MDSs in standby-replay might report\nan oversized cache, as rook did not apply the `mds_cache_memory_limit`\nargument during upgrades. With this update, the `mds_cache_memory_limit`\nargument is applied during upgrades and the mds daemon operates normally. \n(BZ#1951348)\n\n* Previously, the coredumps were not generated in the correct location as\nrook was setting the config option `log_file` to an empty string since\nlogging happened on stdout and not on the files, and hence Ceph read the\nvalue of the `log_file` to build the dump path. With this update, rook does\nnot set the `log_file` and keeps Ceph\u0027s internal default, and hence the\ncoredumps are generated in the correct location and are accessible under\n`/var/log/ceph/`. (BZ#1938049)\n\n* Previously, Ceph became inaccessible, as the mons lose quorum if a mon\npod was drained while another mon was failing over. With this update,\nvoluntary mon drains are prevented while a mon is failing over, and hence\nCeph does not become inaccessible. (BZ#1946573)\n\n* Previously, the mon quorum was at risk, as the operator could erroneously\nremove the new mon if the operator was restarted during a mon failover. 
\nWith this update, the operator completes the same mon failover after the\noperator is restarted, and hence the mon quorum is more reliable in the\nnode drains and mon failover scenarios. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1938106 - [GSS][RFE]Reduce debug level for logs of Nooba Endpoint pod\n1950915 - XSS Vulnerability with Noobaa version 5.5.0-3bacc6b\n1951348 - [GSS][CephFS] health warning \"MDS cache is too large (3GB/1GB); 0 inodes in use by clients, 0 stray files\" for the standby-replay\n1951600 - [4.6.z][Clone of BZ #1936545] setuid and setgid file bits are not retained after a OCS CephFS CSI restore\n1955601 - CVE-2021-3528 NooBaa: noobaa-operator leaking RPC AuthToken into log files\n1957189 - [Rebase] Use RHCS4.2z1 container image with OCS 4..6.5[may require doc update for external mode min supported RHCS version]\n1959980 - When a node is being drained, increase the mon failover timeout to prevent unnecessary mon failover\n1959983 - [GSS][mon] rook-operator scales mons to 4 after healthCheck timeout\n1962483 - [RHEL7][RBD][4.6.z clone] FailedMount error when using restored PVC on app pod\n\n5. \n\nBug Fix(es):\n\n* WMCO patch pub-key-hash annotation to Linux node (BZ#1945248)\n\n* LoadBalancer Service type with invalid external loadbalancer IP breaks\nthe datapath (BZ#1952917)\n\n* Telemetry info not completely available to identify windows nodes\n(BZ#1955319)\n\n* WMCO incorrectly shows node as ready after a failed configuration\n(BZ#1956412)\n\n* kube-proxy service terminated unexpectedly after recreated LB service\n(BZ#1963263)\n\n3. 
Solution:\n\nFor Windows Machine Config Operator upgrades, see the following\ndocumentation:\n\nhttps://docs.openshift.com/container-platform/4.7/windows_containers/window\ns-node-upgrades.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1945248 - WMCO patch pub-key-hash annotation to Linux node\n1946538 - CVE-2021-25736 kubernetes: LoadBalancer Service type don\u0027t create a HNS policy for empty or invalid external loadbalancer IP, what could lead to MITM\n1952917 - LoadBalancer Service type with invalid external loadbalancer IP breaks the datapath\n1955319 - Telemetry info not completely available to identify windows nodes\n1956412 - WMCO incorrectly shows node as ready after a failed configuration\n1963263 - kube-proxy service terminated unexpectedly after recreated LB service\n\n5. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/

Security:

* fastify-reply-from: crafted URL allows prefix escape of the proxied backend service (CVE-2021-21321)

* fastify-http-proxy: crafted URL allows prefix escape of the proxied backend service (CVE-2021-21322)

* nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)

* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)

* redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)

* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)

* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)

* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)

* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)

* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)

* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)

* redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)

* nodejs-lodash: command injection via template (CVE-2021-23337)

* nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)

* browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)

* nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)

* nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)

* nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)

* nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)

* openssl: integer overflow in CipherUpdate (CVE-2021-23840)

* openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)

* nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)

* grafana: snapshot feature allows an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)

* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)

* nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)

* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)

* normalize-url: ReDoS for data URLs (CVE-2021-33502)

* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)

* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)

* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)

* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)

For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.

Bugs:

* RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)

* cluster became offline after apiserver health check (BZ# 1942589)

3. Bugs fixed (https://bugzilla.redhat.com/):

1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters

5. It is comprised of the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502
                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844
                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050
                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903
                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807
                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867
                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894
                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902
                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807
                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895
                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952
                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434
                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503
                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618
                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362
                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326
                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541
                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305
                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947
                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666
                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928
                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938
                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226
                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716
                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673
                   CVE-2022-24407
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory.
See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
* grafana: Snapshot authentication bypass (CVE-2021-39226)
* golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
* nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
* golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
* grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
* grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command.
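The per-architecture digests published in the advisory can be checked mechanically against a pinned image reference. The sketch below is illustrative only (not part of the advisory text): it assumes the reference was obtained from `oc adm release info` output and simply splits it at the last `@` to recover the digest for comparison.

```shell
#!/bin/sh
# Illustrative sketch: compare a pinned release image reference against the
# x86_64 digest quoted in the advisory above.
IMAGE="quay.io/openshift-release-dev/ocp-release@sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"
EXPECTED="sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"

# POSIX parameter expansion: everything after the last '@' is the digest.
DIGEST="${IMAGE##*@}"

if [ "$DIGEST" = "$EXPECTED" ]; then
    echo "digest matches advisory"
else
    echo "digest MISMATCH: $DIGEST" >&2
    exit 1
fi
```

In practice the pinned reference would come from the `oc adm release info` commands shown above; the same comparison applies to the s390x and ppc64le digests.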
Instructions for upgrading a cluster are available at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image
operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch.* labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI
installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role\n2003120 - CI: Uncaught error with ResizeObserver on operand details page\n2003145 - Duplicate operand tab titles causes \"two children with the same key\" warning\n2003164 - OLM, fatal error: concurrent map writes\n2003178 - [FLAKE][knative] The UI doesn\u0027t show updated traffic distribution after accepting the form\n2003193 - Kubelet/crio leaks netns and veth ports in the host\n2003195 - OVN CNI should ensure host veths are removed\n2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting \u0027-e JENKINS_PASSWORD=password\u0027 ENV which was working for old container images\n2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace\n2003239 - \"[sig-builds][Feature:Builds][Slow] can use private repositories as build input\" tests fail outside of CI\n2003244 - Revert libovsdb client code\n2003251 - Patternfly components with list element has list item bullet when they should not. 
\n2003252 - \"[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig\" tests do not work as expected outside of CI\n2003269 - Rejected pods should be filtered from admission regression\n2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release\n2003426 - [e2e][automation] add test for vm details bootorder\n2003496 - [e2e][automation] add test for vm resources requirment settings\n2003641 - All metal ipi jobs are failing in 4.10\n2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state\n2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node\n2003683 - Samples operator is panicking in CI\n2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster \"Connection Details\" page\n2003715 - Error on creating local volume set after selection of the volume mode\n2003743 - Remove workaround keeping /boot RW for kdump support\n2003775 - etcd pod on CrashLoopBackOff after master replacement procedure\n2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver\n2003792 - Monitoring metrics query graph flyover panel is useless\n2003808 - Add Sprint 207 translations\n2003845 - Project admin cannot access image vulnerabilities view\n2003859 - sdn emits events with garbage messages\n2003896 - (release-4.10) ApiRequestCounts conditional gatherer\n2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas\n2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes\n2004059 - [e2e][automation] fix current tests for downstream\n2004060 - Trying to use basic spring boot sample causes crash on Firefox\n2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn\u0027t close after selection\n2004127 - [flake] openshift-controller-manager event 
reason/SuccessfulDelete occurs too frequently\n2004203 - build config\u0027s created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver\n2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory\n2004449 - Boot option recovery menu prevents image boot\n2004451 - The backup filename displayed in the RecentBackup message is incorrect\n2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts\n2004508 - TuneD issues with the recent ConfigParser changes. \n2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions\n2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs\n2004578 - Monitoring and node labels missing for an external storage platform\n2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days\n2004596 - [4.10] Bootimage bump tracker\n2004597 - Duplicate ramdisk log containers running\n2004600 - Duplicate ramdisk log containers running\n2004609 - output of \"crictl inspectp\" is not complete\n2004625 - BMC credentials could be logged if they change\n2004632 - When LE takes a large amount of time, multiple whereabouts are seen\n2004721 - ptp/worker custom threshold doesn\u0027t change ptp events threshold\n2004736 - [knative] Create button on new Broker form is inactive despite form being filled\n2004796 - [e2e][automation] add test for vm scheduling policy\n2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque\n2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card\n2004901 - [e2e][automation] improve kubevirt devconsole tests\n2004962 - Console frontend job consuming too much CPU in CI\n2005014 - state of ODF StorageSystem is misreported during installation or uninstallation\n2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines\n2005179 - pods status 
filter is not taking effect\n2005182 - sync list of deprecated apis about to be removed\n2005282 - Storage cluster name is given as title in StorageSystem details page\n2005355 - setuptools 58 makes Kuryr CI fail\n2005407 - ClusterNotUpgradeable Alert should be set to Severity Info\n2005415 - PTP operator with sidecar api configured throws bind: address already in use\n2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console\n2005554 - The switch status of the button \"Show default project\" is not revealed correctly in code\n2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable\n2005761 - QE - Implementing crw-basic feature file\n2005783 - Fix accessibility issues in the \"Internal\" and \"Internal - Attached Mode\" Installation Flow\n2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty\n2005854 - SSH NodePort service is created for each VM\n2005901 - KS, KCM and KA going Degraded during master nodes upgrade\n2005902 - Current UI flow for MCG only deployment is confusing and doesn\u0027t reciprocate any message to the end-user\n2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics\n2005971 - Change telemeter to report the Application Services product usage metrics\n2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files\n2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased\n2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types\n2006101 - Power off fails for drivers that don\u0027t support Soft power off\n2006243 - Metal IPI upgrade jobs are running out of disk space\n2006291 - 
bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn\u0027t use the 0th address\n2006308 - Backing Store YAML tab on click displays a blank screen on UI\n2006325 - Multicast is broken across nodes\n2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators\n2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource\n2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2006690 - OS boot failure \"x64 Exception Type 06 - Invalid Opcode Exception\"\n2006714 - add retry for etcd errors in kube-apiserver\n2006767 - KubePodCrashLooping may not fire\n2006803 - Set CoreDNS cache entries for forwarded zones\n2006861 - Add Sprint 207 part 2 translations\n2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap\n2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors\n2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded\n2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick\n2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails\n2007271 - CI Integration for Knative test cases\n2007289 - kubevirt tests are failing in CI\n2007322 - Devfile/Dockerfile import does not work for unsupported git host\n2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3. 
\n2007379 - Events are not generated for master offset for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all 
the kebab actions. \n2008521 - gcp-hostname service should correct invalid search entries in resolv.conf\n2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount\n2008539 - Registry doesn\u0027t fall back to secondary ImageContentSourcePolicy Mirror\n2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers\n2008599 - Azure Stack UPI does not have Internal Load Balancer\n2008612 - Plugin asset proxy does not pass through browser cache headers\n2008712 - VPA webhook timeout prevents all pods from starting\n2008733 - kube-scheduler: exposed /debug/pprof port\n2008911 - Prometheus repeatedly scaling prometheus-operator replica set\n2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2008987 - OpenShift SDN Hosted Egress IP\u0027s are not being scheduled to nodes after upgrade to 4.8.12\n2009055 - Instances of OCS to be replaced with ODF on UI\n2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs\n2009083 - opm blocks pruning of existing bundles during add\n2009111 - [IPI-on-GCP] \u0027Install a cluster with nested virtualization enabled\u0027 failed due to unable to launch compute instances\n2009131 - [e2e][automation] add more test about vmi\n2009148 - [e2e][automation] test vm nic presets and options\n2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator\n2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family\n2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted\n2009384 - UI changes to support BindableKinds CRD changes\n2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped\n2009424 - Deployment upgrade is failing availability check\n2009454 - Change web terminal subscription permissions from get to list\n2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer: ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. 
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10] sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs : Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking"Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
\n2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment\n2034785 - ptpconfig with summary_interval cannot be applied\n2034823 - RHEL9 should be starred in template list\n2034838 - An external router can inject routes if no service is added\n2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent\n2034879 - Lifecycle hook\u0027s name and owner shouldn\u0027t be allowed to be empty\n2034881 - Cloud providers components should use K8s 1.23 dependencies\n2034884 - ART cannot build the image because it tries to download controller-gen\n2034889 - `oc adm prune deployments` does not work\n2034898 - Regression in recently added Events feature\n2034957 - update openshift-apiserver to kube 1.23.1\n2035015 - ClusterLogForwarding CR remains stuck remediating forever\n2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster\n2035141 - [RFE] Show GPU/Host devices in template\u0027s details tab\n2035146 - \"kubevirt-plugin~PVC cannot be empty\" shows on add-disk modal while adding existing PVC\n2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting\n2035199 - IPv6 support in mtu-migration-dispatcher.yaml\n2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing\n2035250 - Peering with ebgp peer over multi-hops doesn\u0027t work\n2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices\n2035315 - invalid test cases for AWS passthrough mode\n2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env\n2035321 - Add Sprint 211 translations\n2035326 - [ExternalCloudProvider] installation with additional network on workers fails\n2035328 - Ccoctl does not ignore credentials request manifest marked for deletion\n2035333 - Kuryr orphans ports on 504 errors from Neutron\n2035348 - Fix two grammar issues in kubevirt-plugin.json strings\n2035393 - oc set data 
--dry-run=server makes persistent changes to configmaps and secrets\n2035409 - OLM E2E test depends on operator package that\u0027s no longer published\n2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address\n2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to \u0027ecs-cn-hangzhou.aliyuncs.com\u0027 timeout, although the specified region is \u0027us-east-1\u0027\n2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster\n2035467 - UI: Queried metrics can\u0027t be ordered on Oberve-\u003eMetrics page\n2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers\n2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class\n2035602 - [e2e][automation] add tests for Virtualization Overview page cards\n2035703 - Roles -\u003e RoleBindings tab doesn\u0027t show RoleBindings correctly\n2035704 - RoleBindings list page filter doesn\u0027t apply\n2035705 - Azure \u0027Destroy cluster\u0027 get stuck when the cluster resource group is already not existing. 
\n2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed\n2035772 - AccessMode and VolumeMode is not reserved for customize wizard\n2035847 - Two dashes in the Cronjob / Job pod name\n2035859 - the output of opm render doesn\u0027t contain olm.constraint which is defined in dependencies.yaml\n2035882 - [BIOS setting values] Create events for all invalid settings in spec\n2035903 - One redundant capi-operator credential requests in \u201coc adm extract --credentials-requests\u201d\n2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen\n2035927 - Cannot enable HighNodeUtilization scheduler profile\n2035933 - volume mode and access mode are empty in customize wizard review tab\n2035969 - \"ip a \" shows \"Error: Peer netns reference is invalid\" after create test pods\n2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation\n2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error\n2036029 - New added cloud-network-config operator doesn\u2019t supported aws sts format credential\n2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend\n2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes\n2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23\n2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23\n2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments\n2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists\n2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected\n2036826 - `oc adm prune deployments` can prune the RC/RS\n2036827 - The 
ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform\n2036861 - kube-apiserver is degraded while enable multitenant\n2036937 - Command line tools page shows wrong download ODO link\n2036940 - oc registry login fails if the file is empty or stdout\n2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container\n2036989 - Route URL copy to clipboard button wraps to a separate line by itself\n2036990 - ZTP \"DU Done inform policy\" never becomes compliant on multi-node clusters\n2036993 - Machine API components should use Go lang version 1.17\n2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. \n2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api\n2037073 - Alertmanager container fails to start because of startup probe never being successful\n2037075 - Builds do not support CSI volumes\n2037167 - Some log level in ibm-vpc-block-csi-controller are hard code\n2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles\n2037182 - PingSource badge color is not matched with knativeEventing color\n2037203 - \"Running VMs\" card is too small in Virtualization Overview\n2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly\n2037237 - Add \"This is a CD-ROM boot source\" to customize wizard\n2037241 - default TTL for noobaa cache buckets should be 0\n2037246 - Cannot customize auto-update boot source\n2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately\n2037288 - Remove stale image reference\n2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources\n2037483 - Rbacs for Pods within the CBO should be more restrictive\n2037484 - Bump dependencies to k8s 1.23\n2037554 - Mismatched wave number error message should include the wave numbers that are in 
conflict\n2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]\n2037635 - impossible to configure custom certs for default console route in ingress config\n2037637 - configure custom certificate for default console route doesn\u0027t take effect for OCP \u003e= 4.8\n2037638 - Builds do not support CSI volumes as volume sources\n2037664 - text formatting issue in Installed Operators list table\n2037680 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037689 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037801 - Serverless installation is failing on CI jobs for e2e tests\n2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format\n2037856 - use lease for leader election\n2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10\n2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests\n2037904 - upgrade operator deployment failed due to memory limit too low for manager container\n2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]\n2038034 - non-privileged user cannot see auto-update boot source\n2038053 - Bump dependencies to k8s 1.23\n2038088 - Remove ipa-downloader references\n2038160 - The `default` project missed the annotation : openshift.io/node-selector: \"\"\n2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional\n2038196 - must-gather is missing collecting some metal3 resources\n2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)\n2038253 - Validator Policies are long lived\n2038272 - Failures to build a PreprovisioningImage are not 
reported\n2038384 - Azure Default Instance Types are Incorrect\n2038389 - Failing test: [sig-arch] events should not repeat pathologically\n2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket\n2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips\n2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained\n2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect\n2038663 - update kubevirt-plugin OWNERS\n2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via \"oc adm groups new\"\n2038705 - Update ptp reviewers\n2038761 - Open Observe-\u003eTargets page, wait for a while, page become blank\n2038768 - All the filters on the Observe-\u003eTargets page can\u0027t work\n2038772 - Some monitors failed to display on Observe-\u003eTargets page\n2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node\n2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard\n2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation\n2038864 - E2E tests fail because multi-hop-net was not created\n2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console\n2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured\n2038968 - Move feature gates from a carry patch to openshift/api\n2039056 - Layout issue with breadcrumbs on API explorer page\n2039057 - Kind column is not wide enough in API explorer page\n2039064 - Bulk Import e2e test flaking at a high rate\n2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled\n2039085 - Cloud 
credential operator configuration failing to apply in hypershift/ROKS clusters\n2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost\n2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy\n2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator\n2039170 - [upgrade]Error shown on registry operator \"missing the cloud-provider-config configmap\" after upgrade\n2039227 - Improve image customization server parameter passing during installation\n2039241 - Improve image customization server parameter passing during installation\n2039244 - Helm Release revision history page crashes the UI\n2039294 - SDN controller metrics cannot be consumed correctly by prometheus\n2039311 - oc Does Not Describe Build CSI Volumes\n2039315 - Helm release list page should only fetch secrets for deployed charts\n2039321 - SDN controller metrics are not being consumed by prometheus\n2039330 - Create NMState button doesn\u0027t work in OperatorHub web console\n2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations\n2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights Advisor 
widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. \n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages\nthat are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most\nsignificant bug fixes and enhancements included in this release. 
Solution:\n\nBefore applying the update, back up your existing installation, including\nall applications, configuration files, databases and database settings, and\nso on. \n\nThe References section of this erratum contains a download link for the\nupdate. You must be logged in to download the update. Bugs fixed (https://bugzilla.redhat.com/):\n\n1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT\n1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing\n\n5. ==========================================================================\nUbuntu Security Notice USN-5038-1\nAugust 12, 2021\n\npostgresql-10, postgresql-12, postgresql-13 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.04\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in PostgreSQL. \n\nSoftware Description:\n- postgresql-13: Object-relational SQL database\n- postgresql-12: Object-relational SQL database\n- postgresql-10: Object-relational SQL database\n\nDetails:\n\nIt was discovered that the PostgreSQL planner could create incorrect plans\nin certain circumstances. A remote attacker could use this issue to cause\nPostgreSQL to crash, resulting in a denial of service, or possibly obtain\nsensitive information from memory. (CVE-2021-3677)\n\nIt was discovered that PostgreSQL incorrectly handled certain SSL\nrenegotiation ClientHello messages from clients. A remote attacker could\npossibly use this issue to cause PostgreSQL to crash, resulting in a denial\nof service. 
(CVE-2021-3449)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.04:\n postgresql-13 13.4-0ubuntu0.21.04.1\n\nUbuntu 20.04 LTS:\n postgresql-12 12.8-0ubuntu0.20.04.1\n\nUbuntu 18.04 LTS:\n postgresql-10 10.18-0ubuntu0.18.04.1\n\nThis update uses a new upstream release, which includes additional bug\nfixes. After a standard system update you need to restart PostgreSQL to\nmake all the necessary changes. \n\nSecurity Fix(es):\n\n* golang: crypto/tls: certificate of wrong type is causing TLS client to\npanic\n(CVE-2021-34558)\n* golang: net: lookup functions may return invalid host names\n(CVE-2021-33195)\n* golang: net/http/httputil: ReverseProxy forwards connection headers if\nfirst one is empty (CVE-2021-33197)\n* golang: math/big.Rat: may cause a panic or an unrecoverable fatal error\nif passed inputs with very large exponents (CVE-2021-33198)\n* golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a\ncustom TokenReader (CVE-2021-27918)\n* golang: net/http: panic in ReadRequest and ReadResponse when reading a\nvery large header (CVE-2021-31525)\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (CVE-2021-33196)\n\nIt was found that the CVE-2021-27918, CVE-2021-31525 and CVE-2021-33196\nhave been incorrectly mentioned as fixed in RHSA for Serverless client kn\n1.16.0. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1983651 - Release of OpenShift Serverless Serving 1.17.0\n1983654 - Release of OpenShift Serverless Eventing 1.17.0\n1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names\n1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty\n1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents\n1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196\n\n5. OpenSSL Security Advisory [25 March 2021]\n=========================================\n\nCA certificate check bypass with X509_V_FLAG_X509_STRICT (CVE-2021-3450)\n========================================================================\n\nSeverity: High\n\nThe X509_V_FLAG_X509_STRICT flag enables additional security checks of the\ncertificates present in a certificate chain. It is not set by default. \n\nStarting from OpenSSL version 1.1.1h a check to disallow certificates in\nthe chain that have explicitly encoded elliptic curve parameters was added\nas an additional strict check. \n\nAn error in the implementation of this check meant that the result of a\nprevious check to confirm that certificates in the chain are valid CA\ncertificates was overwritten. This effectively bypasses the check\nthat non-CA certificates must not be able to issue other certificates. \n\nIf a \"purpose\" has been configured then there is a subsequent opportunity\nfor checks that the certificate is a valid CA. All of the named \"purpose\"\nvalues implemented in libcrypto perform this check. Therefore, where\na purpose is set the certificate chain will still be rejected even when the\nstrict flag has been used. 
A purpose is set by default in libssl client and\nserver certificate verification routines, but it can be overridden or\nremoved by an application. \n\nIn order to be affected, an application must explicitly set the\nX509_V_FLAG_X509_STRICT verification flag and either not set a purpose\nfor the certificate verification or, in the case of TLS client or server\napplications, override the default purpose. \n\nThis issue was reported to OpenSSL on 18th March 2021 by Benjamin Kaduk\nfrom Akamai and was discovered by Xiang Ding and others at Akamai. The fix was\ndeveloped by Tom\u00e1\u0161 Mr\u00e1z. \n\nThis issue was reported to OpenSSL on 17th March 2021 by Nokia. The fix was\ndeveloped by Peter K\u00e4stle and Samuel Sapalski from Nokia. \n\nNote\n====\n\nOpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended\nsupport is available for premium support customers:\nhttps://www.openssl.org/support/contracts.html\n\nOpenSSL 1.1.0 is out of support and no longer receiving updates of any kind. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20210325.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. 
\n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n", "sources": [ { "db": "NVD", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" } ], "trust": 2.61 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-3449", "trust": 2.9 }, { "db": "TENABLE", "id": "TNS-2021-06", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-09", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-05", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-10", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/3", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/2", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/4", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/1", "trust": 1.2 }, { "db": "SIEMENS", "id": "SSA-772220", "trust": 1.2 }, { "db": "SIEMENS", "id": "SSA-389290", "trust": 1.2 }, { "db": "PULSESECURE", "id": "SA44845", "trust": 1.2 }, { "db": "MCAFEE", "id": "SB10356", "trust": 1.2 }, { "db": "JVN", "id": "JVNVU92126369", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2021-001383", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162197", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "163257", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162183", "trust": 0.2 }, { "db": 
"PACKETSTORM", "id": "162114", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162076", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162350", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162041", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162013", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162383", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162699", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162337", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162151", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162189", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162196", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162172", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161984", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162201", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162307", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162200", "trust": 0.1 }, { "db": "SEEBUG", "id": "SSVID-99170", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-388130", "trust": 0.1 }, { "db": "ICS CERT", "id": "ICSA-22-104-05", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-3449", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163209", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166279", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163815", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164192", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169659", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "id": 
"VAR-202103-1464", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-388130" } ], "trust": 0.6431162642424243 }, "last_update_date": "2024-07-23T21:43:25.615000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "hitachi-sec-2021-119 Software product security information", "trust": 0.8, "url": "https://www.debian.org/security/2021/dsa-4875" }, { "title": "Debian Security Advisories: DSA-4875-1 openssl -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=b5207bd1e788bc6e8d94f410cf4801bc" }, { "title": "Red Hat: CVE-2021-3449", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-3449" }, { "title": "Amazon Linux 2: ALAS2-2021-1622", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1622" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-3449 log" }, { "title": "Cisco: Multiple Vulnerabilities in OpenSSL Affecting Cisco Products: March 2021", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=cisco_security_advisories_and_alerts_ciscoproducts\u0026qid=cisco-sa-openssl-2021-ghy28djd" }, { "title": "Hitachi Security Advisories: Vulnerability in JP1/Base and JP1/ File Transmission Server/FTP", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-130" }, { "title": "Tenable Security Advisories: [R1] Tenable.sc 5.18.0 Fixes One Third-party Vulnerability", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-06" }, { "title": "Tenable Security Advisories: [R1] Nessus 8.13.2 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-05" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Common Services", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-117" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Analyzer viewpoint", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-119" }, { "title": "Tenable Security Advisories: [R1] Nessus Network Monitor 5.13.1 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-09" }, { "title": "Tenable Security Advisories: [R1] LCE 6.0.9 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-10" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220056 - security advisory" }, { "title": "CVE-2021-3449 OpenSSL \u003c1.1.1k DoS exploit", "trust": 0.1, "url": "https://github.com/terorie/cve-2021-3449 " }, { "title": "CVE-2021-3449 OpenSSL \u003c1.1.1k DoS exploit", "trust": 0.1, "url": "https://github.com/gitchangye/cve " }, { "title": "NSAPool-PenTest", "trust": 0.1, "url": "https://github.com/alicemongodin/nsapool-pentest " }, { "title": "Analysis of attack vectors for embedded Linux", "trust": 0.1, "url": "https://github.com/fefi7/attacking_embedded_linux " }, { "title": "openssl-cve", "trust": 0.1, "url": 
"https://github.com/yonhan3/openssl-cve " }, { "title": "CVE-Check", "trust": 0.1, "url": "https://github.com/falk-werner/cve-check " }, { "title": "SEEKER_dataset", "trust": 0.1, "url": "https://github.com/sf4bin/seeker_dataset " }, { "title": "Year of the Jellyfish (YotJF)", "trust": 0.1, "url": "https://github.com/rnbochsr/yr_of_the_jellyfish " }, { "title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories", "trust": 0.1, "url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories " }, { "title": "TASSL-1.1.1k", "trust": 0.1, "url": "https://github.com/jntass/tassl-1.1.1k " }, { "title": "Trivy by Aqua security\nRefer this official repository for explore Trivy Action", "trust": 0.1, "url": "https://github.com/scholarnishu/trivy-by-aquasecurity " }, { "title": "Trivy by Aqua security\nRefer this official repository for explore Trivy Action", "trust": 0.1, "url": "https://github.com/thecyberbaby/trivy-by-aquasecurity " }, { "title": "\ud83d\udc31 Catlin Vulnerability Scanner \ud83d\udc31", "trust": 0.1, "url": "https://github.com/vinamra28/tekton-image-scan-trivy " }, { "title": "DEVOPS + ACR + TRIVY", "trust": 0.1, "url": "https://github.com/arindam0310018/04-apr-2022-devops__scan-images-in-acr-using-trivy " }, { "title": "Trivy Demo", "trust": 0.1, "url": "https://github.com/fredrkl/trivy-demo " }, { "title": "GitHub Actions CI App Pipeline", "trust": 0.1, "url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc " }, { "title": "Awesome Stars", "trust": 0.1, "url": "https://github.com/taielab/awesome-hacking-lists " }, { "title": "podcast-dl-gael", "trust": 0.1, "url": "https://github.com/githubforsnap/podcast-dl-gael " }, { "title": "sec-tools", "trust": 0.1, "url": "https://github.com/matengfei000/sec-tools " }, { "title": "sec-tools", "trust": 0.1, "url": "https://github.com/anquanscan/sec-tools " }, { "title": "\u66f4\u65b0\u4e8e 2023-11-27 
08:36:01\n\u5b89\u5168\n\u5f00\u53d1\n\u672a\u5206\u7c7b\n\u6742\u4e03\u6742\u516b", "trust": 0.1, "url": "https://github.com/20142995/sectool " }, { "title": "Vulnerability", "trust": 0.1, "url": "https://github.com/tzwlhack/vulnerability " }, { "title": "OpenSSL-CVE-lib", "trust": 0.1, "url": "https://github.com/chnzzh/openssl-cve-lib " }, { "title": "PoC in GitHub", "trust": 0.1, "url": "https://github.com/soosmile/poc " }, { "title": "PoC in GitHub", "trust": 0.1, "url": "https://github.com/manas3c/cve-poc " }, { "title": "The Register", "trust": 0.1, "url": "https://www.theregister.co.uk/2021/03/25/openssl_bug_fix/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-476", "trust": 1.1 }, { "problemtype": "NULL Pointer dereference (CWE-476) [NVD Evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.3, "url": "https://tools.cisco.com/security/center/content/ciscosecurityadvisory/cisco-sa-openssl-2021-ghy28djd" }, { "trust": 1.3, "url": "https://www.openssl.org/news/secadv/20210325.txt" }, { "trust": 1.3, "url": "https://www.debian.org/security/2021/dsa-4875" }, { "trust": 1.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449" }, { "trust": 1.2, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf" }, { "trust": 1.2, "url": 
"https://cert-portal.siemens.com/productcert/pdf/ssa-772220.pdf" }, { "trust": 1.2, "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44845" }, { "trust": 1.2, "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2021-0013" }, { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20210326-0006/" }, { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20210513-0002/" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-05" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-06" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-09" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-10" }, { "trust": 1.2, "url": "https://security.gentoo.org/glsa/202103-03" }, { "trust": 1.2, "url": "https://security.freebsd.org/advisories/freebsd-sa-21:07.openssl.asc" }, { "trust": 1.2, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.2, "url": "https://lists.debian.org/debian-lts-announce/2021/08/msg00029.html" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/1" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/2" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/3" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/4" }, { "trust": 1.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026id=sb10356" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=fb9fa6b51defd48157eeb207f52181f735d96148" }, { "trust": 1.1, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 1.0, "url": "https://security.netapp.com/advisory/ntap-20240621-0006/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu92126369/" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.4, "url": 
"https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-13776" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-3842" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-13776" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24977" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3326" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=fb9fa6b51defd48157eeb207f52181f735d96148" }, { "trust": 0.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026amp;id=sb10356" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/476.html" }, { "trust": 0.1, "url": "https://github.com/terorie/cve-2021-3449" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-104-05" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2479" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23240" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3139" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13543" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9951" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23239" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36242" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9948" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25659" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26116" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13584" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26137" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13543" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36242" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13584" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25659" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27619" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9983" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3528" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25678" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23336" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25678" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25736" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2130" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-27219" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/windows_containers/window" }, { "trust": 0.1, "url": "https://issues.jboss.org/):" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1196" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30761" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27781" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0055" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2014-3577" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25660" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21684" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0056" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39226" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9952" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25677" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30666" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1200" }, { "trust": 0.1, "url": "https://access.redhat.com/jbossnetwork/restricted/listsoftware.html?product=core.service.apachehttp\u0026downloadtype=securitypatches\u0026version=2.4.37" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5038-1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3677" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/postgresql-10/10.18-0ubuntu0.18.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/postgresql-12/12.8-0ubuntu0.20.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/postgresql-13/13.4-0ubuntu0.21.04.1" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33195" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33197" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33195" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-34558" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3556" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33197" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3421" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3703" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index" }, { "trust": 0.1, "url": "https://www.openssl.org/support/contracts.html" }, { "trust": 0.1, "url": "https://www.openssl.org/policies/secpolicy.html" } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", 
"id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-03-25T00:00:00", "db": "VULHUB", "id": "VHN-388130" }, { "date": "2021-03-25T00:00:00", "db": "VULMON", "id": "CVE-2021-3449" }, { "date": "2021-05-06T00:00:00", "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "date": "2021-06-17T18:34:10", "db": "PACKETSTORM", "id": "163209" }, { "date": "2021-06-23T15:44:15", "db": "PACKETSTORM", "id": "163257" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": "2021-04-14T16:40:32", "db": "PACKETSTORM", "id": "162183" }, { "date": "2022-03-11T16:38:38", "db": "PACKETSTORM", "id": "166279" }, { "date": "2021-04-15T13:50:04", "db": "PACKETSTORM", "id": "162197" }, { "date": "2021-08-13T14:20:11", "db": "PACKETSTORM", "id": "163815" }, { "date": "2021-09-17T16:04:56", "db": "PACKETSTORM", "id": "164192" }, { "date": "2021-03-25T12:12:12", "db": "PACKETSTORM", "id": "169659" }, { "date": "2021-03-25T15:15:13.450000", 
"db": "NVD", "id": "CVE-2021-3449" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-08-29T00:00:00", "db": "VULHUB", "id": "VHN-388130" }, { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2021-3449" }, { "date": "2021-09-13T07:43:00", "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "date": "2024-06-21T19:15:19.710000", "db": "NVD", "id": "CVE-2021-3449" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "163815" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL\u00a0 In \u00a0NULL\u00a0 Pointer dereference vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-001383" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "xss", "sources": [ { "db": "PACKETSTORM", "id": "163209" } ], "trust": 0.1 } }
var-202102-1488
Vulnerability from variot
The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However, it fails to correctly handle any errors that may occur while parsing the issuer field (which might occur if the issuer field is maliciously constructed). This may subsequently result in a NULL pointer dereference and a crash, leading to a potential denial-of-service attack. The function X509_issuer_and_serial_hash() is never called directly by OpenSSL itself, so applications are only vulnerable if they use this function directly and they use it on certificates that may have been obtained from untrusted sources. OpenSSL versions 1.1.1i and below are affected by this issue; users of these versions should upgrade to OpenSSL 1.1.1j. OpenSSL versions 1.0.2x and below are also affected; however, OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y; other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.1.1j (Affected 1.1.1-1.1.1i). Fixed in OpenSSL 1.0.2y (Affected 1.0.2-1.0.2x). Please keep an eye on CNNVD or manufacturer announcements.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
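As a rough illustration of what the affected API computes, the following is a minimal Python sketch, not OpenSSL's actual implementation: the real C function first converts the issuer to a string via X509_NAME_oneline() before hashing, and in vulnerable versions (1.1.1i and below, 1.0.2x and below) a malformed issuer made that conversion return NULL, which was then dereferenced without a check.

```python
import hashlib

def issuer_and_serial_hash(issuer_oneline: bytes, serial_der: bytes) -> int:
    """Rough sketch of OpenSSL's X509_issuer_and_serial_hash(): an MD5
    digest over a textual form of the issuer plus the serial number
    bytes, folded into an unsigned long. Illustrative only; the inputs
    here stand in for the parsed certificate fields, and the missing
    error check on issuer parsing is exactly where the NULL pointer
    dereference occurred in vulnerable OpenSSL versions."""
    ctx = hashlib.md5()
    ctx.update(issuer_oneline)
    ctx.update(serial_der)
    digest = ctx.digest()
    # Fold the first 4 digest bytes into an integer, low byte first,
    # mirroring how OpenSSL packs the digest into an unsigned long.
    return int.from_bytes(digest[:4], "little")
```

A defensively written caller would verify that the certificate's issuer field parsed cleanly before invoking the real API on untrusted certificates, or simply upgrade to 1.1.1j / 1.0.2y, where the parsing error is handled.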
APPLE-SA-2021-05-25-2 macOS Big Sur 11.4
macOS Big Sur 11.4 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212529.
AMD Available for: macOS Big Sur Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-30678: Yu Wang of Didi Research America
AMD Available for: macOS Big Sur Impact: A local user may be able to cause unexpected system termination or read kernel memory Description: A logic issue was addressed with improved state management. CVE-2021-30676: shrek_wzw
App Store Available for: macOS Big Sur Impact: A malicious application may be able to break out of its sandbox Description: A path handling issue was addressed with improved validation. CVE-2021-30688: Thijs Alkemade of Computest Research Division
AppleScript Available for: macOS Big Sur Impact: A malicious application may bypass Gatekeeper checks Description: A logic issue was addressed with improved state management. CVE-2021-30669: Yair Hoffmann
Audio Available for: macOS Big Sur Impact: Processing a maliciously crafted audio file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30707: hjy79425575 working with Trend Micro Zero Day Initiative
Audio Available for: macOS Big Sur Impact: Parsing a maliciously crafted audio file may lead to disclosure of user information Description: This issue was addressed with improved checks. CVE-2021-30685: Mickey Jin (@patch1t) of Trend Micro
Core Services Available for: macOS Big Sur Impact: A malicious application may be able to gain root privileges Description: A validation issue existed in the handling of symlinks. This issue was addressed with improved validation of symlinks. CVE-2021-30681: Zhongcheng Li (CK01)
CoreAudio Available for: macOS Big Sur Impact: Processing a maliciously crafted audio file may disclose restricted memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30686: Mickey Jin of Trend Micro
Crash Reporter Available for: macOS Big Sur Impact: A malicious application may be able to modify protected parts of the file system Description: A logic issue was addressed with improved state management. CVE-2021-30727: Cees Elzinga
CVMS Available for: macOS Big Sur Impact: A local attacker may be able to elevate their privileges Description: This issue was addressed with improved checks. CVE-2021-30724: Mickey Jin (@patch1t) of Trend Micro
Dock Available for: macOS Big Sur Impact: A malicious application may be able to access a user's call history Description: An access issue was addressed with improved access restrictions. CVE-2021-30673: Josh Parnham (@joshparnham)
Graphics Drivers Available for: macOS Big Sur Impact: A remote attacker may cause an unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-30684: Liu Long of Ant Security Light-Year Lab
Graphics Drivers Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2021-30735: Jack Dates of RET2 Systems, Inc. (@ret2systems) working with Trend Micro Zero Day Initiative
Heimdal Available for: macOS Big Sur Impact: A local user may be able to leak sensitive user information Description: A logic issue was addressed with improved state management. CVE-2021-30697: Gabe Kirkpatrick (@gabe_k)
Heimdal Available for: macOS Big Sur Impact: A malicious application may cause a denial of service or potentially disclose memory contents Description: A memory corruption issue was addressed with improved state management. CVE-2021-30710: Gabe Kirkpatrick (@gabe_k)
Heimdal Available for: macOS Big Sur Impact: A malicious application could execute arbitrary code leading to compromise of user information Description: A use after free issue was addressed with improved memory management. CVE-2021-30683: Gabe Kirkpatrick (@gabe_k)
ImageIO Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to disclosure of user information Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30687: Hou JingYi (@hjy79425575) of Qihoo 360
ImageIO Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to disclosure of user information Description: This issue was addressed with improved checks. CVE-2021-30700: Ye Zhang (@co0py_Cat) of Baidu Security
ImageIO Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30701: Mickey Jin (@patch1t) of Trend Micro and Ye Zhang of Baidu Security
ImageIO Available for: macOS Big Sur Impact: Processing a maliciously crafted ASTC file may disclose memory contents Description: This issue was addressed with improved checks. CVE-2021-30705: Ye Zhang of Baidu Security
Intel Graphics Driver Available for: macOS Big Sur Impact: A local user may be able to cause unexpected system termination or read kernel memory Description: An out-of-bounds read issue was addressed by removing the vulnerable code. CVE-2021-30719: an anonymous researcher working with Trend Micro Zero Day Initiative
Intel Graphics Driver Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2021-30728: Liu Long of Ant Security Light-Year Lab CVE-2021-30726: Yinyi Wu(@3ndy1) of Qihoo 360 Vulcan Team
Kernel Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved validation. CVE-2021-30740: Linus Henze (pinauten.de)
Kernel Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved state management. CVE-2021-30704: an anonymous researcher
Kernel Available for: macOS Big Sur Impact: Processing a maliciously crafted message may lead to a denial of service Description: A logic issue was addressed with improved state management. CVE-2021-30715: The UK's National Cyber Security Centre (NCSC)
Kernel Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges Description: A buffer overflow was addressed with improved size validation. CVE-2021-30736: Ian Beer of Google Project Zero
Kernel Available for: macOS Big Sur Impact: A local attacker may be able to elevate their privileges Description: A memory corruption issue was addressed with improved validation. CVE-2021-30739: Zuozhi Fan (@pattern_F_) of Ant Group Tianqiong Security Lab
Kext Management Available for: macOS Big Sur Impact: A local user may be able to load unsigned kernel extensions Description: A logic issue was addressed with improved state management. CVE-2021-30680: Csaba Fitzl (@theevilbit) of Offensive Security
LaunchServices Available for: macOS Big Sur Impact: A malicious application may be able to break out of its sandbox Description: This issue was addressed with improved environment sanitization. CVE-2021-30677: Ron Waisberg (@epsilan)
Login Window Available for: macOS Big Sur Impact: A person with physical access to a Mac may be able to bypass Login Window Description: A logic issue was addressed with improved state management. CVE-2021-30702: Jewel Lambert of Original Spin, LLC.
Mail Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to misrepresent application state Description: A logic issue was addressed with improved state management. CVE-2021-30696: Fabian Ising and Damian Poddebniak of Münster University of Applied Sciences
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An information disclosure issue was addressed with improved state management. CVE-2021-30723: Mickey Jin (@patch1t) of Trend Micro CVE-2021-30691: Mickey Jin (@patch1t) of Trend Micro CVE-2021-30692: Mickey Jin (@patch1t) of Trend Micro CVE-2021-30694: Mickey Jin (@patch1t) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2021-30725: Mickey Jin (@patch1t) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30746: Mickey Jin (@patch1t) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A validation issue was addressed with improved logic. CVE-2021-30693: Mickey Jin (@patch1t) & Junzhi Lu (@pwn0rz) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30695: Mickey Jin (@patch1t) & Junzhi Lu (@pwn0rz) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30708: Mickey Jin (@patch1t) & Junzhi Lu (@pwn0rz) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may disclose memory contents Description: This issue was addressed with improved checks. CVE-2021-30709: Mickey Jin (@patch1t) of Trend Micro
NSOpenPanel Available for: macOS Big Sur Impact: An application may be able to gain elevated privileges Description: This issue was addressed by removing the vulnerable code. CVE-2021-30679: Gabe Kirkpatrick (@gabe_k)
OpenLDAP Available for: macOS Big Sur Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2020-36226 CVE-2020-36227 CVE-2020-36223 CVE-2020-36224 CVE-2020-36225 CVE-2020-36221 CVE-2020-36228 CVE-2020-36222 CVE-2020-36230 CVE-2020-36229
PackageKit Available for: macOS Big Sur Impact: A malicious application may be able to overwrite arbitrary files Description: An issue with path validation logic for hardlinks was addressed with improved path sanitization. CVE-2021-30738: Qingyang Chen of Topsec Alpha Team and Csaba Fitzl (@theevilbit) of Offensive Security
Security Available for: macOS Big Sur Impact: Processing a maliciously crafted certificate may lead to arbitrary code execution Description: A memory corruption issue in the ASN.1 decoder was addressed by removing the vulnerable code. CVE-2021-30737: xerub
smbx Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to perform denial of service Description: A logic issue was addressed with improved state management. CVE-2021-30716: Aleksandar Nikolic of Cisco Talos
smbx Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to execute arbitrary code Description: A memory corruption issue was addressed with improved state management. CVE-2021-30717: Aleksandar Nikolic of Cisco Talos
smbx Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to leak sensitive user information Description: A path handling issue was addressed with improved validation. CVE-2021-30721: Aleksandar Nikolic of Cisco Talos
smbx Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to leak sensitive user information Description: An information disclosure issue was addressed with improved state management. CVE-2021-30722: Aleksandar Nikolic of Cisco Talos
smbx Available for: macOS Big Sur Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-30712: Aleksandar Nikolic of Cisco Talos
Software Update Available for: macOS Big Sur Impact: A person with physical access to a Mac may be able to bypass Login Window during a software update Description: This issue was addressed with improved checks. CVE-2021-30668: Syrus Kimiagar and Danilo Paffi Monteiro
SoftwareUpdate Available for: macOS Big Sur Impact: A non-privileged user may be able to modify restricted settings Description: This issue was addressed with improved checks. CVE-2021-30718: SiQian Wei of ByteDance Security
TCC Available for: macOS Big Sur Impact: A malicious application may be able to send unauthorized Apple events to Finder Description: A validation issue was addressed with improved logic. CVE-2021-30671: Ryan Bell (@iRyanBell)
TCC Available for: macOS Big Sur Impact: A malicious application may be able to bypass Privacy preferences. Apple is aware of a report that this issue may have been actively exploited. Description: A permissions issue was addressed with improved validation. CVE-2021-30713: an anonymous researcher
WebKit Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A cross-origin issue with iframe elements was addressed with improved tracking of security origins. CVE-2021-30744: Dan Hite of jsontop
WebKit Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-21779: Marcin Towalski of Cisco Talos
WebKit Available for: macOS Big Sur Impact: A malicious application may be able to leak sensitive user information Description: A logic issue was addressed with improved restrictions. CVE-2021-30682: an anonymous researcher and 1lastBr3ath
WebKit Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A logic issue was addressed with improved state management. CVE-2021-30689: an anonymous researcher
WebKit Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30749: an anonymous researcher and mipu94 of SEFCOM lab, ASU, working with Trend Micro Zero Day Initiative CVE-2021-30734: Jack Dates of RET2 Systems, Inc. (@ret2systems) working with Trend Micro Zero Day Initiative
WebKit Available for: macOS Big Sur Impact: A malicious website may be able to access restricted ports on arbitrary servers Description: A logic issue was addressed with improved restrictions. CVE-2021-30720: David Schütz (@xdavidhu)
WebRTC Available for: macOS Big Sur Impact: A remote attacker may be able to cause a denial of service Description: A null pointer dereference was addressed with improved input validation. CVE-2021-23841: Tavis Ormandy of Google CVE-2021-30698: Tavis Ormandy of Google
Additional recognition
App Store We would like to acknowledge Thijs Alkemade of Computest Research Division for their assistance.
CoreCapture We would like to acknowledge Zuozhi Fan (@pattern_F_) of Ant-financial TianQiong Security Lab for their assistance.
ImageIO We would like to acknowledge Jzhu working with Trend Micro Zero Day Initiative and an anonymous researcher for their assistance.
Mail Drafts We would like to acknowledge Lauritz Holtmann (@lauritz) for their assistance.
WebKit We would like to acknowledge Chris Salls (@salls) of Makai Security for their assistance.
Installation note:
This update may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/
Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header
2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data
2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way
2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable
-
8) - aarch64, ppc64le, s390x, x86_64
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section. 1965362 - In renegotiated handshake openssl sends extensions which client didn't advertise in second ClientHello [rhel-8]
-
8) - noarch
- Description:
EDK (Embedded Development Kit) is a project to enable UEFI support for Virtual Machines. This package contains a sample 64-bit UEFI firmware for QEMU and KVM.
The following packages have been upgraded to a later upstream version: edk2 (20210527gite1999b264f1f).
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: openssl security update
Advisory ID: RHSA-2021:3798-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2021:3798
Issue date: 2021-10-12
CVE Names: CVE-2021-23840 CVE-2021-23841
=====================================================================
- Summary:
An update for openssl is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library.
Security Fix(es):
- openssl: integer overflow in CipherUpdate (CVE-2021-23840)

- openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
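As an illustrative aside (not part of the advisory): the entry's affected-version data lists the upstream fixes for these two CVEs as OpenSSL 1.0.2y and 1.1.1j, and a small helper can classify an upstream version string against those releases. Note that distribution packages such as Red Hat's openssl-1.0.2k-22.el7_9 backport the fixes without changing the upstream version string, so this check does not apply to them; the function name is hypothetical.

```python
import re

# Upstream patch letter that first contains the CVE-2021-23840 /
# CVE-2021-23841 fixes, per affected branch.
FIXED_PATCH = {(1, 0, 2): "y", (1, 1, 1): "j"}

def upstream_is_fixed(version: str) -> bool:
    """Return True if an *upstream* OpenSSL version string (e.g. '1.1.1j')
    contains the fixes for CVE-2021-23840/23841. Hypothetical helper;
    distro backports are not reflected in the version string."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)([a-z]*)", version)
    if not m:
        raise ValueError(f"unrecognized version string: {version!r}")
    base = tuple(int(x) for x in m.group(1, 2, 3))
    letter = m.group(4)
    if base not in FIXED_PATCH:
        # Branches not named by this advisory (e.g. 3.0) are out of scope;
        # treat anything newer than 1.1.1 as not affected here.
        return base > (1, 1, 1)
    # Patch letters sort lexicographically within a branch ('i' < 'j').
    return letter >= FIXED_PATCH[base]
```

For example, `upstream_is_fixed("1.1.1i")` is False while `upstream_is_fixed("1.1.1j")` is True.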
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.
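The restart requirement can be checked per process: after the update, a process that still has the old library loaded shows the replaced file as "(deleted)" in its /proc/<pid>/maps. A minimal sketch of such a filter (the function name and regex are illustrative, not part of the advisory):

```shell
# stale_ssl_maps: filter /proc/<pid>/maps-style lines on stdin, keeping
# mappings of OpenSSL libraries whose on-disk file was replaced by an
# update (the kernel marks these "(deleted)"). Illustrative sketch only.
stale_ssl_maps() {
  grep -E 'lib(ssl|crypto)\.so[^ ]* \(deleted\)'
}

# Example scan over all processes (reading every maps file needs root):
# for p in /proc/[0-9]*/maps; do
#   stale_ssl_maps < "$p" > /dev/null 2>&1 && echo "restart needed: ${p%/maps}"
# done
```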
- Bugs fixed (https://bugzilla.redhat.com/):
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: openssl-1.0.2k-22.el7_9.src.rpm
x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: openssl-1.0.2k-22.el7_9.src.rpm
x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: openssl-1.0.2k-22.el7_9.src.rpm
ppc64: openssl-1.0.2k-22.el7_9.ppc64.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm openssl-devel-1.0.2k-22.el7_9.ppc.rpm openssl-devel-1.0.2k-22.el7_9.ppc64.rpm openssl-libs-1.0.2k-22.el7_9.ppc.rpm openssl-libs-1.0.2k-22.el7_9.ppc64.rpm
ppc64le: openssl-1.0.2k-22.el7_9.ppc64le.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm openssl-devel-1.0.2k-22.el7_9.ppc64le.rpm openssl-libs-1.0.2k-22.el7_9.ppc64le.rpm
s390x: openssl-1.0.2k-22.el7_9.s390x.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm openssl-devel-1.0.2k-22.el7_9.s390.rpm openssl-devel-1.0.2k-22.el7_9.s390x.rpm openssl-libs-1.0.2k-22.el7_9.s390.rpm openssl-libs-1.0.2k-22.el7_9.s390x.rpm
x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: openssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm openssl-perl-1.0.2k-22.el7_9.ppc64.rpm openssl-static-1.0.2k-22.el7_9.ppc.rpm openssl-static-1.0.2k-22.el7_9.ppc64.rpm
ppc64le: openssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm openssl-perl-1.0.2k-22.el7_9.ppc64le.rpm openssl-static-1.0.2k-22.el7_9.ppc64le.rpm
s390x: openssl-debuginfo-1.0.2k-22.el7_9.s390.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm openssl-perl-1.0.2k-22.el7_9.s390x.rpm openssl-static-1.0.2k-22.el7_9.s390.rpm openssl-static-1.0.2k-22.el7_9.s390x.rpm
x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: openssl-1.0.2k-22.el7_9.src.rpm
x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
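In practice the signature check is `rpm -K <package>.rpm` (alias `rpm --checksig`), which prints one status line per package. A hypothetical helper for scripting around that output follows; the exact wording of `rpm -K` output varies between rpm versions, so treat this as a sketch:

```shell
# sig_ok: decide pass/fail from one line of `rpm -K` output. A line such
# as "openssl-1.0.2k-22.el7_9.x86_64.rpm: rsa sha1 (md5) pgp md5 OK"
# passes; anything containing "NOT OK", or not ending in " OK", fails.
# Hypothetical helper; rpm's output wording varies across versions.
sig_ok() {
  case "$1" in
    *"NOT OK"*) return 1 ;;
    *" OK")     return 0 ;;
    *)          return 1 ;;
  esac
}

# Usage sketch:
# rpm -K openssl-1.0.2k-22.el7_9.x86_64.rpm | while read -r line; do
#   sig_ok "$line" || echo "signature check failed: $line"
# done
```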
- References:
https://access.redhat.com/security/cve/CVE-2021-23840 https://access.redhat.com/security/cve/CVE-2021-23841 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:
Red Hat Advanced Cluster Management for Kubernetes 2.1.12 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
Container updates:
- RHACM 2.1.12 images (BZ# 2007489)
- Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

Bugs fixed (https://bugzilla.redhat.com/):
2007489 - RHACM 2.1.12 images
2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets
2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request
2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser
2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure
2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams
2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack
2011020 - CVE-2021-41099 redis: Integer overflow issue with strings
- Bugs fixed (https://bugzilla.redhat.com/):
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding
- Red Hat OpenShift Container Storage is highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provides a multicloud data management service with an S3 compatible API.
Bug Fix(es):
- Previously, when the namespace store target was deleted, no alert was sent to the namespace bucket because of an issue in calculating the namespace bucket health. With this update, the issue in calculating the namespace bucket health is fixed and alerts are triggered as expected. (BZ#1993873)
- Previously, the Multicloud Object Gateway (MCG) components performed slowly and were under heavy load because of non-optimized database queries. With this update, the database queries are optimized, which reduces the compute resources and time they take.

Bugs fixed (https://bugzilla.redhat.com/):

1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore's target bucket is deleted
2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input
- Bugs fixed (https://bugzilla.redhat.com/):
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1488", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.57" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1j" }, { "model": "mysql server", "scope": "gte", "trust": 1.0, 
"vendor": "oracle", "version": "8.0.15" }, { "model": "iphone os", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "14.6" }, { "model": "mysql server", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "enterprise manager ops center", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.4.0.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.4.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "snapcenter", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.3.5" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.59" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.58" }, { "model": "mysql server", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "5.7.33" }, { "model": "safari", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "14.1.1" }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "11.1" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.3.0" }, { "model": "tenable.sc", "scope": "gte", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.5.0.0.0" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.0.0.2" }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { 
"model": "ipados", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "14.6" }, { "model": "communications cloud native core policy", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "1.15.0" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.0.2y" }, { "model": "oncommand insight", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.0.2" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.1" }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "11.4" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.11.0" }, { "model": "mysql enterprise monitor", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.9.0.0.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.11.1" }, { "model": "essbase", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.2" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.3.1.2" }, { "model": "tenable.sc", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "5.17.0" }, { "model": "zfs storage appliance kit", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.8" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "enterprise manager for storage 
management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.4.0.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2021-23841" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.2y", "versionStartIncluding": "1.0.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1j", "versionStartIncluding": "1.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:tenable:tenable.sc:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.17.0", "versionStartIncluding": "5.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.13.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "11.4", "versionStartIncluding": "11.1", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "14.6", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:apple:safari:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "14.1.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "14.6", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:snapcenter:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_insight:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:12.2.1.3.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:12.2.1.4.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:5.5.0.0.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_for_storage_management:13.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_ops_center:12.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:zfs_storage_appliance_kit:8.8:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:graalvm:20.3.1.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:21.0.0.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:19.3.5:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "8.0.23", "versionStartIncluding": "8.0.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "5.7.33", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_enterprise_monitor:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:essbase:21.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:5.9.0.0.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:1.15.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-23841" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "165286" }, { "db": 
"PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" } ], "trust": 0.9 }, "cve": "CVE-2021-23841", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-382524", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": 
"NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.2, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-23841", "trust": 1.0, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-382524", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However it fails to correctly handle any errors that may occur while parsing the issuer field (which might occur if the issuer field is maliciously constructed). This may subsequently result in a NULL pointer deref and a crash leading to a potential denial of service attack. The function X509_issuer_and_serial_hash() is never directly called by OpenSSL itself so applications are only vulnerable if they use this function directly and they use it on certificates that may have been obtained from untrusted sources. OpenSSL versions 1.1.1i and below are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1j. OpenSSL versions 1.0.2x and below are affected by this issue. However OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.1.1j (Affected 1.1.1-1.1.1i). 
Fixed in OpenSSL 1.0.2y (Affected 1.0.2-1.0.2x). Please keep an eye on CNNVD or manufacturer announcements. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-05-25-2 macOS Big Sur 11.4\n\nmacOS Big Sur 11.4 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT212529. \n\nAMD\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30678: Yu Wang of Didi Research America\n\nAMD\nAvailable for: macOS Big Sur\nImpact: A local user may be able to cause unexpected system\ntermination or read kernel memory\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30676: shrek_wzw\n\nApp Store\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to break out of its\nsandbox\nDescription: A path handling issue was addressed with improved\nvalidation. \nCVE-2021-30688: Thijs Alkemade of Computest Research Division\n\nAppleScript\nAvailable for: macOS Big Sur\nImpact: A malicious application may bypass Gatekeeper checks\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30669: Yair Hoffmann\n\nAudio\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted audio file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30707: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nAudio\nAvailable for: macOS Big Sur\nImpact: Parsing a maliciously crafted audio file may lead to\ndisclosure of user information\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30685: Mickey Jin (@patch1t) of Trend Micro\n\nCore Services\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to gain root privileges\nDescription: A validation issue existed in the handling of symlinks. \nThis issue was addressed with improved validation of symlinks. \nCVE-2021-30681: Zhongcheng Li (CK01)\n\nCoreAudio\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted audio file may disclose\nrestricted memory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30686: Mickey Jin of Trend Micro\n\nCrash Reporter\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to modify protected parts\nof the file system\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30727: Cees Elzinga\n\nCVMS\nAvailable for: macOS Big Sur\nImpact: A local attacker may be able to elevate their privileges\nDescription: This issue was addressed with improved checks. \nCVE-2021-30724: Mickey Jin (@patch1t) of Trend Micro\n\nDock\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to access a user\u0027s call\nhistory\nDescription: An access issue was addressed with improved access\nrestrictions. \nCVE-2021-30673: Josh Parnham (@joshparnham)\n\nGraphics Drivers\nAvailable for: macOS Big Sur\nImpact: A remote attacker may cause an unexpected application\ntermination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30684: Liu Long of Ant Security Light-Year Lab\n\nGraphics Drivers\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2021-30735: Jack Dates of RET2 Systems, Inc. 
(@ret2systems)\nworking with Trend Micro Zero Day Initiative\n\nHeimdal\nAvailable for: macOS Big Sur\nImpact: A local user may be able to leak sensitive user information\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30697: Gabe Kirkpatrick (@gabe_k)\n\nHeimdal\nAvailable for: macOS Big Sur\nImpact: A malicious application may cause a denial of service or\npotentially disclose memory contents\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-30710: Gabe Kirkpatrick (@gabe_k)\n\nHeimdal\nAvailable for: macOS Big Sur\nImpact: A malicious application could execute arbitrary code leading\nto compromise of user information\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-30683: Gabe Kirkpatrick (@gabe_k)\n\nImageIO\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted image may lead to disclosure\nof user information\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30687: Hou JingYi (@hjy79425575) of Qihoo 360\n\nImageIO\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted image may lead to disclosure\nof user information\nDescription: This issue was addressed with improved checks. \nCVE-2021-30700: Ye Zhang(@co0py_Cat) of Baidu Security\n\nImageIO\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30701: Mickey Jin (@patch1t) of Trend Micro and Ye Zhang of\nBaidu Security\n\nImageIO\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted ASTC file may disclose\nmemory contents\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30705: Ye Zhang of Baidu Security\n\nIntel Graphics Driver\nAvailable for: macOS Big Sur\nImpact: A local user may be able to cause unexpected system\ntermination or read kernel memory\nDescription: An out-of-bounds read issue was addressed by removing\nthe vulnerable code. \nCVE-2021-30719: an anonymous researcher working with Trend Micro Zero\nDay Initiative\n\nIntel Graphics Driver\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2021-30728: Liu Long of Ant Security Light-Year Lab\nCVE-2021-30726: Yinyi Wu(@3ndy1) of Qihoo 360 Vulcan Team\n\nKernel\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30740: Linus Henze (pinauten.de)\n\nKernel\nAvailable for: macOS Big Sur\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30704: an anonymous researcher\n\nKernel\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted message may lead to a denial\nof service\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30715: The UK\u0027s National Cyber Security Centre (NCSC)\n\nKernel\nAvailable for: macOS Big Sur\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A buffer overflow was addressed with improved size\nvalidation. \nCVE-2021-30736: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: macOS Big Sur\nImpact: A local attacker may be able to elevate their privileges\nDescription: A memory corruption issue was addressed with improved\nvalidation. 
\nCVE-2021-30739: Zuozhi Fan (@pattern_F_) of Ant Group Tianqiong\nSecurity Lab\n\nKext Management\nAvailable for: macOS Big Sur\nImpact: A local user may be able to load unsigned kernel extensions\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30680: Csaba Fitzl (@theevilbit) of Offensive Security\n\nLaunchServices\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to break out of its\nsandbox\nDescription: This issue was addressed with improved environment\nsanitization. \nCVE-2021-30677: Ron Waisberg (@epsilan)\n\nLogin Window\nAvailable for: macOS Big Sur\nImpact: A person with physical access to a Mac may be able to bypass\nLogin Window\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30702: Jewel Lambert of Original Spin, LLC. \n\nMail\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nmisrepresent application state\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30696: Fabian Ising and Damian Poddebniak of M\u00fcnster\nUniversity of Applied Sciences\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An information disclosure issue was addressed with\nimproved state management. \nCVE-2021-30723: Mickey Jin (@patch1t) of Trend Micro\nCVE-2021-30691: Mickey Jin (@patch1t) of Trend Micro\nCVE-2021-30692: Mickey Jin (@patch1t) of Trend Micro\nCVE-2021-30694: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2021-30725: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30746: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A validation issue was addressed with improved logic. \nCVE-2021-30693: Mickey Jin (@patch1t) \u0026 Junzhi Lu (@pwn0rz) of Trend\nMicro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30695: Mickey Jin (@patch1t) \u0026 Junzhi Lu (@pwn0rz) of Trend\nMicro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30708: Mickey Jin (@patch1t) \u0026 Junzhi Lu (@pwn0rz) of Trend\nMicro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: This issue was addressed with improved checks. \nCVE-2021-30709: Mickey Jin (@patch1t) of Trend Micro\n\nNSOpenPanel\nAvailable for: macOS Big Sur\nImpact: An application may be able to gain elevated privileges\nDescription: This issue was addressed by removing the vulnerable\ncode. \nCVE-2021-30679: Gabe Kirkpatrick (@gabe_k)\n\nOpenLDAP\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. 
\nCVE-2020-36226\nCVE-2020-36227\nCVE-2020-36223\nCVE-2020-36224\nCVE-2020-36225\nCVE-2020-36221\nCVE-2020-36228\nCVE-2020-36222\nCVE-2020-36230\nCVE-2020-36229\n\nPackageKit\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to overwrite arbitrary\nfiles\nDescription: An issue with path validation logic for hardlinks was\naddressed with improved path sanitization. \nCVE-2021-30738: Qingyang Chen of Topsec Alpha Team and Csaba Fitzl\n(@theevilbit) of Offensive Security\n\nSecurity\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted certificate may lead to\narbitrary code execution\nDescription: A memory corruption issue in the ASN.1 decoder was\naddressed by removing the vulnerable code. \nCVE-2021-30737: xerub\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nperform denial of service\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30716: Aleksandar Nikolic of Cisco Talos\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nexecute arbitrary code\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-30717: Aleksandar Nikolic of Cisco Talos\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nleak sensitive user information\nDescription: A path handling issue was addressed with improved\nvalidation. \nCVE-2021-30721: Aleksandar Nikolic of Cisco Talos\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nleak sensitive user information\nDescription: An information disclosure issue was addressed with\nimproved state management. 
\nCVE-2021-30722: Aleksandar Nikolic of Cisco Talos\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30712: Aleksandar Nikolic of Cisco Talos\n\nSoftware Update\nAvailable for: macOS Big Sur\nImpact: A person with physical access to a Mac may be able to bypass\nLogin Window during a software update\nDescription: This issue was addressed with improved checks. \nCVE-2021-30668: Syrus Kimiagar and Danilo Paffi Monteiro\n\nSoftwareUpdate\nAvailable for: macOS Big Sur\nImpact: A non-privileged user may be able to modify restricted\nsettings\nDescription: This issue was addressed with improved checks. \nCVE-2021-30718: SiQian Wei of ByteDance Security\n\nTCC\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to send unauthorized\nApple events to Finder\nDescription: A validation issue was addressed with improved logic. \nCVE-2021-30671: Ryan Bell (@iRyanBell)\n\nTCC\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to bypass Privacy\npreferences. Apple is aware of a report that this issue may have been\nactively exploited. \nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2021-30713: an anonymous researcher\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A cross-origin issue with iframe elements was addressed\nwith improved tracking of security origins. \nCVE-2021-30744: Dan Hite of jsontop\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2021-21779: Marcin Towalski of Cisco Talos\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to leak sensitive user\ninformation\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30682: an anonymous researcher and 1lastBr3ath\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30689: an anonymous researcher\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30749: an anonymous researcher and mipu94 of SEFCOM lab,\nASU. working with Trend Micro Zero Day Initiative\nCVE-2021-30734: Jack Dates of RET2 Systems, Inc. (@ret2systems)\nworking with Trend Micro Zero Day Initiative\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: A malicious website may be able to access restricted ports on\narbitrary servers\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30720: David Sch\u00fctz (@xdavidhu)\n\nWebRTC\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to cause a denial of service\nDescription: A null pointer dereference was addressed with improved\ninput validation. \nCVE-2021-23841: Tavis Ormandy of Google\nCVE-2021-30698: Tavis Ormandy of Google\n\nAdditional recognition\n\nApp Store\nWe would like to acknowledge Thijs Alkemade of Computest Research\nDivision for their assistance. \n\nCoreCapture\nWe would like to acknowledge Zuozhi Fan (@pattern_F_) of Ant-\nfinancial TianQiong Security Lab for their assistance. \n\nImageIO\nWe would like to acknowledge Jzhu working with Trend Micro Zero Day\nInitiative and an anonymous researcher for their assistance. 
\n\nMail Drafts\nWe would like to acknowledge Lauritz Holtmann (@_lauritz_) for their\nassistance. \n\nWebKit\nWe would like to acknowledge Chris Salls (@salls) of Makai Security\nfor their assistance. \n\nInstallation note:\n\nThis update may be obtained from the Mac App Store or\nApple\u0027s Software Downloads web site:\nhttps://support.apple.com/downloads/\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmCtU9AACgkQZcsbuWJ6\njjDC5g/+P0Hya9smOX6XVhxtnwe+vh2d5zOrKLBymdkvDPGw1UQoGOq08+7eu02Q\nvsManS/aP1UKNcMnbALHNFbFXv61ZjWi+71qgGGAQAe3EtYTJchBiIIyOBNIHoOJ\n8X9sOeiyFzOOKw+GyVsBMNRL9Oh678USC4qgyyO5u2+Oexehu+6N9YNdAzwZgy6o\nmuP+NlZ08s80ahRfq/6q8uKj7+Is0k5OEdxpWTnJOoXUDzZPj4Vo7H0HL6zjuqg3\nCurJQABF3kDBWgZCvroMU6/HpbilGPE+JUFV7HPfaMe6iE3FsfrOq101w+/ovuNM\nhJ3yk/QENoh5BYdHKJo7zPVZBteGX20EVPdWfTsnz6a/hk568A+ICiupFIqwEuQv\nesIBWzgab9YUb2fAaZ071Z+lSn0Rj7tm3V/rhdwq19tYD3Q7BqEJ+YxYCH2zvyIB\nmP4/NoMpsDiTqFradR8Skac5uwINpZzAHjFyWLj0QVWVMxyQB8EGshR16YPkMryJ\nrjGyNIqZPcZ/Z6KJqpvNJrfI+b0oeqFMBUwpwK/7aQFPP/MvsM+UVSySipRiqwoa\nWAHMuY4SQwcseok7N6Rf+zAEYm9Nc+YglYpTW2taw6g0vWNIuCbyzPdC/Srrjw98\nod2jLahPwyoBg6WBvXoZ6H4YOWFAywf225nYk3l5ATsG6rNbhYk=\n=Avma\n-----END PGP SIGNATURE-----\n\n\n. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. \n1965362 - In renegotiated handshake openssl sends extensions which client didn\u0027t advertise in second ClientHello [rhel-8]\n\n6. 8) - noarch\n\n3. Description:\n\nEDK (Embedded Development Kit) is a project to enable UEFI support for\nVirtual Machines. This package contains a sample 64-bit UEFI firmware for\nQEMU and KVM. \n\nThe following packages have been upgraded to a later upstream version: edk2\n(20210527gite1999b264f1f). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: openssl security update\nAdvisory ID: RHSA-2021:3798-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:3798\nIssue date: 2021-10-12\nCVE Names: CVE-2021-23840 CVE-2021-23841 \n=====================================================================\n\n1. Summary:\n\nAn update for openssl is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nOpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and\nTransport Layer Security (TLS) protocols, as well as a full-strength\ngeneral-purpose cryptography library. \n\nSecurity Fix(es):\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nFor the update to take effect, all services linked to the OpenSSL library\nmust be restarted, or the system rebooted. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 
7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nppc64:\nopenssl-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc64.rpm\n\nppc64le:\nopenssl-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc64le.rpm\n\ns390x:\nopenssl-1.0.2k-22.el7_9.s390x.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm\nopenssl-devel-1.0.2k-22.el7_9.s390.rpm\nopenssl-devel-1.0.2k-22.el7_9.s390x.rpm\nopenssl-libs-1.0.2k-22.el7_9.s390.rpm\nopenssl-libs-1.0.2k-22.el7_9.s390x.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-perl-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc64.rpm\n\nppc64le:\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-perl-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc64le.rpm\n\ns390x:\nopenssl-debuginfo-1.0.2k-22.el7_9.s390.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm\nopenssl-perl-1.0.2k-22.el7_9.s390x.rpm\nopenssl-static-1.0.2k-22.el7_9.s390.rpm\nopenssl-static-1.0.2k-22.el7_9.s390x.rpm\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYWWqjtzjgjWX9erEAQj4lg/+IFxqmMQqLSvyz8cKUAPgss/+/wFMpRgh\nZZxYBQQ0cBFfWFlROVLaRdeiGcZYkyJCRDqy2Yb8YO1A4PnSOc+htLFYmSmU2kcm\nQLHinOzGEZo/44vN7Qsl4WhJkJIdlysCwKpkkOCUprMEnhlWMvja2eSSG9JLH16d\nRqGe4AsJQLKSKLgmhejCOqxb9am+t9zBW0zaZHP4UR52Ju1rG5rLjBJ85Gcrmp2B\nvp/GVEQ/Asid4MZA2WTx+s6wj5Dt7JOdLWrUbcYAC0I8oPWbAoZJTfPkM7S6Xv+U\n68iruVFTh74IkCbQ+SNLoYjiDAVJqtAVRVBha7Fd3/gWR6aJLLaqluLRGvd0mwXY\npohCS0ynuMQ9wtYOJ3ezSVcBN+/d9Hs/3s8RWQTzrNG6jtBe57H9/tNkeSVFSVvu\nPMKXsUoOrIUE2HCflJytDB9wkQmsWxiZoH/xVlrtD0D11egZ4EWjJL6x+xtCTAkT\nu67CAwsCKxxCeNmz42uBtXSwFXoUapJnsviGzAx247T2pyuXlYMYHlsOy7CtBvIk\njEEosCMM72UyXO4XsYTXc0jM3ze6iQTcF9irwhy+X+rTB4IXBubdUEoT0jnKlwfI\nBQvoPEBlcG+f0VU8BL+FCOosvM0ZqC7KGGOwJLoG1Vqz8rbtmhpcmNAOvzUiHdm3\nT4OjSl1NzQQ=\n=Taj2\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.1.12 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nContainer updates:\n\n* RHACM 2.1.12 images (BZ# 2007489)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2007489 - RHACM 2.1.12 images\n2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets\n2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request\n2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser\n2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure\n2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams\n2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack\n2011020 - CVE-2021-41099 redis: Integer overflow issue with strings\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert 
`FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6. \nRed Hat OpenShift Container Storage is highly scalable, production-grade\npersistent storage for stateful applications running in the Red Hat\nOpenShift Container Platform. In addition to persistent storage, Red Hat\nOpenShift Container Storage provides a multicloud data management service\nwith an S3 compatible API. \n\nBug Fix(es):\n\n* Previously, when the namespace store target was deleted, no alert was\nsent to the namespace bucket because of an issue in calculating the\nnamespace bucket health. With this update, the issue in calculating the\nnamespace bucket health is fixed and alerts are triggered as expected. \n(BZ#1993873)\n\n* Previously, the Multicloud Object Gateway (MCG) components performed\nslowly and there was a lot of pressure on the MCG components due to\nnon-optimized database queries. With this update the non-optimized\ndatabase queries are fixed which reduces the compute resources and time\ntaken for queries. Bugs fixed (https://bugzilla.redhat.com/):\n\n1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore\u0027s target bucket is deleted\n2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2021-23841" }, { "db": "VULHUB", "id": "VHN-382524" }, { "db": "PACKETSTORM", "id": "162826" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" } ], "trust": 1.89 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-382524", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-23841", "trust": 2.1 }, { "db": "TENABLE", "id": "TNS-2021-03", "trust": 1.1 }, { "db": "TENABLE", "id": "TNS-2021-09", "trust": 1.1 }, { "db": "PULSESECURE", "id": "SA44846", "trust": 1.1 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.1 }, { "db": "PACKETSTORM", "id": "165096", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "164583", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "164889", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "165002", "trust": 0.2 }, { "db": 
"PACKETSTORM", "id": "162826", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "164890", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162151", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161525", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165099", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162823", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164928", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162824", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164927", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161459", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165129", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162041", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-382524", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165286", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164562", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164489", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164967", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "PACKETSTORM", "id": "162826" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "id": "VAR-202102-1488", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-382524" } ], "trust": 0.30766129 }, "last_update_date": "2024-07-23T20:39:26.069000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { 
"problemtype": "CWE-476", "trust": 1.1 }, { "problemtype": "CWE-190", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.1, "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44846" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20210219-0009/" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20210513-0002/" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht212528" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht212529" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht212534" }, { "trust": 1.1, "url": "https://www.openssl.org/news/secadv/20210216.txt" }, { "trust": 1.1, "url": "https://www.tenable.com/security/tns-2021-03" }, { "trust": 1.1, "url": "https://www.tenable.com/security/tns-2021-09" }, { "trust": 1.1, "url": "https://www.debian.org/security/2021/dsa-4855" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2021/may/67" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2021/may/70" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2021/may/68" }, { "trust": 1.1, "url": "https://security.gentoo.org/glsa/202103-03" }, { "trust": 1.1, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuapr2021.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.0, "url": 
"https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=122a19ab48091c657f7cb1fb3af9fc07bd557bbf" }, { "trust": 1.0, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=8252ee4d90f3f2004d3d0aeeed003ad49c9a7807" }, { "trust": 1.0, "url": "https://security.netapp.com/advisory/ntap-20240621-0006/" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.9, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" }, { "trust": 0.6, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.4, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.4, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.4, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.3, "url": "https://issues.jboss.org/):" }, { "trust": 0.3, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35522" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35524" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35521" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35523" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32626" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32687" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22543" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32626" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-41099" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32675" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3656" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3653" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22543" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22922" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22924" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32675" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2016-4658" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-41099" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3653" }, { "trust": 0.2, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32627" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32687" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37576" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32628" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32672" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36222" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32627" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-32672" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22923" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32628" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37576" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=122a19ab48091c657f7cb1fb3af9fc07bd557bbf" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=8252ee4d90f3f2004d3d0aeeed003ad49c9a7807" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36228" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21779" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30684" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30671" }, { "trust": 0.1, "url": "https://support.apple.com/downloads/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30682" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30669" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30685" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36221" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-36225" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30676" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36226" }, { "trust": 0.1, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36224" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36229" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36223" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30679" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30673" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30678" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30677" }, { "trust": 0.1, "url": "https://support.apple.com/kb/ht201222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36230" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30681" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30680" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36227" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30683" }, { "trust": 0.1, "url": "https://support.apple.com/ht212529." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5128" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21409" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4424" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21670" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21670" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25741" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23017" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25648" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-21671" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3925" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32690" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21671" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32690" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23017" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25741" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3798" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3949" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23133" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3573" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26141" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27777" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26147" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14615" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36386" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24587" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26144" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20197" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3487" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0427" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36312" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31829" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26145" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3564" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10001" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35448" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3489" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28971" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26146" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26139" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3679" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24588" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36158" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24504" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33194" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3348" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24503" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20284" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29646" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24502" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-0129" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3635" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26143" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20194" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3659" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33200" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29660" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26140" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3600" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20239" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3732" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28950" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31916" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26301" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26301" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-28957" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8037" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8037" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20095" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28493" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#low" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4032" } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "PACKETSTORM", "id": "162826" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "PACKETSTORM", "id": "162826" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": 
"164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-02-16T00:00:00", "db": "VULHUB", "id": "VHN-382524" }, { "date": "2021-05-26T17:50:31", "db": "PACKETSTORM", "id": "162826" }, { "date": "2021-12-15T15:20:33", "db": "PACKETSTORM", "id": "165286" }, { "date": "2021-11-10T17:13:10", "db": "PACKETSTORM", "id": "164889" }, { "date": "2021-11-10T17:13:18", "db": "PACKETSTORM", "id": "164890" }, { "date": "2021-10-20T15:45:47", "db": "PACKETSTORM", "id": "164562" }, { "date": "2021-10-13T14:47:32", "db": "PACKETSTORM", "id": "164489" }, { "date": "2021-10-21T15:31:47", "db": "PACKETSTORM", "id": "164583" }, { "date": "2021-11-15T17:25:56", "db": "PACKETSTORM", "id": "164967" }, { "date": "2021-11-29T18:12:32", "db": "PACKETSTORM", "id": "165096" }, { "date": "2021-11-17T15:25:40", "db": "PACKETSTORM", "id": "165002" }, { "date": "2021-02-16T17:15:13.377000", "db": "NVD", "id": "CVE-2021-23841" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-01-09T00:00:00", "db": "VULHUB", "id": "VHN-382524" }, { "date": "2024-06-21T19:15:17.377000", "db": "NVD", "id": "CVE-2021-23841" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Apple Security Advisory 2021-05-25-2", "sources": [ { "db": "PACKETSTORM", "id": "162826" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": 
"overflow", "sources": [ { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" } ], "trust": 0.5 } }
cve-2023-5622
Vulnerability from cvelistv5
Vendor | Product | Version
---|---|---
Tenable | Nessus Network Monitor | 0
{ "containers": { "adp": [ { "affected": [ { "cpes": [ "cpe:2.3:a:tenable:nessus_network_monitor:*:*:*:*:*:windows:*:*" ], "defaultStatus": "unknown", "product": "nessus_network_monitor", "vendor": "tenable", "versions": [ { "lessThan": "6.3.0", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2023-5622", "options": [ { "Exploitation": "none" }, { "Automatable": "no" }, { "Technical Impact": "total" } ], "role": "CISA Coordinator", "timestamp": "2024-06-22T03:55:30.951388Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2024-06-24T12:51:02.378Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" }, { "providerMetadata": { "dateUpdated": "2024-08-02T08:07:32.319Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://www.tenable.com/security/tns-2023-34" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "affected", "platforms": [ "Windows" ], "product": "Nessus Network Monitor", "vendor": "Tenable", "versions": [ { "lessThan": "6.3.0", "status": "affected", "version": "0", "versionType": "6.3.0" } ] } ], "datePublic": "2023-10-25T19:00:00.000Z", "descriptions": [ { "lang": "en", "supportingMedia": [ { "base64": false, "type": "text/html", "value": "\n\nUnder certain conditions, Nessus Network Monitor could allow a low privileged user to escalate privileges to NT AUTHORITY\\SYSTEM on Windows hosts by replacing a specially crafted file." } ], "value": "\nUnder certain conditions, Nessus Network Monitor could allow a low privileged user to escalate privileges to NT AUTHORITY\\SYSTEM on Windows hosts by replacing a specially crafted file." 
} ], "impacts": [ { "capecId": "CAPEC-233", "descriptions": [ { "lang": "en", "value": "CAPEC-233 Privilege Escalation" } ] } ], "metrics": [ { "cvssV3_1": { "attackComplexity": "LOW", "attackVector": "LOCAL", "availabilityImpact": "NONE", "baseScore": 7.1, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N", "version": "3.1" }, "format": "CVSS", "scenarios": [ { "lang": "en", "value": "GENERAL" } ] } ], "providerMetadata": { "dateUpdated": "2023-10-26T16:18:16.410Z", "orgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "shortName": "tenable" }, "references": [ { "url": "https://www.tenable.com/security/tns-2023-34" } ], "source": { "advisory": "TNS-2023-34", "discovery": "EXTERNAL" }, "title": "Privilege Escalation ", "x_generator": { "engine": "Vulnogram 0.1.0-dev" } } }, "cveMetadata": { "assignerOrgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "assignerShortName": "tenable", "cveId": "CVE-2023-5622", "datePublished": "2023-10-26T16:18:16.410Z", "dateReserved": "2023-10-17T19:03:14.686Z", "dateUpdated": "2024-08-02T08:07:32.319Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2024-9158
Vulnerability from cvelistv5
Vendor | Product | Version
---|---|---
Tenable | Nessus Network Monitor | 0 < 6.5.0
{ "containers": { "adp": [ { "metrics": [ { "other": { "content": { "id": "CVE-2024-9158", "options": [ { "Exploitation": "none" }, { "Automatable": "no" }, { "Technical Impact": "partial" } ], "role": "CISA Coordinator", "timestamp": "2024-09-30T17:21:28.392571Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2024-09-30T17:22:16.903Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "affected", "platforms": [ "Windows", "Linux" ], "product": "Nessus Network Monitor", "vendor": "Tenable", "versions": [ { "lessThan": "6.5.0", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "datePublic": "2024-09-24T07:00:00.000Z", "descriptions": [ { "lang": "en", "supportingMedia": [ { "base64": false, "type": "text/html", "value": "A stored cross site scripting vulnerability exists in Nessus Network Monitor where an authenticated, privileged local attacker could inject arbitrary code into the NNM UI via the local CLI." } ], "value": "A stored cross site scripting vulnerability exists in Nessus Network Monitor where an authenticated, privileged local attacker could inject arbitrary code into the NNM UI via the local CLI." 
} ], "impacts": [ { "capecId": "CAPEC-592", "descriptions": [ { "lang": "en", "value": "CAPEC-592 Stored XSS" } ] } ], "metrics": [ { "cvssV3_1": { "attackComplexity": "LOW", "attackVector": "NETWORK", "availabilityImpact": "HIGH", "baseScore": 8.4, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "integrityImpact": "HIGH", "privilegesRequired": "HIGH", "scope": "CHANGED", "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:C/C:H/I:H/A:H", "version": "3.1" }, "format": "CVSS", "scenarios": [ { "lang": "en", "value": "GENERAL" } ] } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-79", "description": "CWE-79 Improper Neutralization of Input During Web Page Generation (XSS or \u0027Cross-site Scripting\u0027)", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-09-30T16:24:55.635Z", "orgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "shortName": "tenable" }, "references": [ { "url": "https://www.tenable.com/security/tns-2024-17" } ], "solutions": [ { "lang": "en", "supportingMedia": [ { "base64": false, "type": "text/html", "value": "Tenable has released Nessus Network Monitor 6.5.0 to address these issues. The installation files can be obtained from the Tenable Downloads Portal (\u003ca target=\"_blank\" rel=\"nofollow\" href=\"https://www.tenable.com/downloads/nessus-network-monitor\"\u003e\u003cu\u003ehttps://www.tenable.com/downloads/nessus-network-monitor\u003c/u\u003e\u003c/a\u003e).\n\n\u003cbr\u003e" } ], "value": "Tenable has released Nessus Network Monitor 6.5.0 to address these issues. The installation files can be obtained from the Tenable Downloads Portal ( https://www.tenable.com/downloads/nessus-network-monitor https://www.tenable.com/downloads/nessus-network-monitor )." 
} ], "source": { "advisory": "tns-2024-17", "discovery": "INTERNAL" }, "title": "XSS", "x_generator": { "engine": "Vulnogram 0.2.0" } } }, "cveMetadata": { "assignerOrgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "assignerShortName": "tenable", "cveId": "CVE-2024-9158", "datePublished": "2024-09-30T16:24:55.635Z", "dateReserved": "2024-09-24T16:17:19.544Z", "dateUpdated": "2024-09-30T17:22:16.903Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
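Each record above carries a CVSS v3.1 `vectorString` (e.g. `CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:C/C:H/I:H/A:H`). A minimal sketch of splitting such a vector into a metric-to-value mapping, assuming only the standard `metric:value` slash-separated layout from the CVSS v3.1 specification (the function name is illustrative, not part of any record):

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.1 vector string into a {metric: value} dict.

    Assumes the standard "CVSS:3.1/AV:N/AC:L/..." layout; does not
    validate that every mandatory base metric is present.
    """
    parts = vector.split("/")
    if not parts or not parts[0].startswith("CVSS:"):
        raise ValueError("not a CVSS vector: %r" % vector)
    # First segment names the spec version, e.g. "CVSS:3.1" -> "3.1".
    metrics = {"version": parts[0].split(":", 1)[1]}
    for part in parts[1:]:
        key, _, value = part.partition(":")
        metrics[key] = value
    return metrics


# Example: the CVE-2024-9158 vector from the record above.
print(parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:C/C:H/I:H/A:H"))
```

Such a mapping makes it straightforward to filter records by, say, attack vector (`AV`) or privileges required (`PR`) without re-parsing the raw string each time.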
cve-2023-5624
Vulnerability from cvelistv5
Vendor | Product | Version
---|---|---
Tenable | Nessus Network Monitor | 0
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-02T08:07:32.303Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://www.tenable.com/security/tns-2023-34" } ], "title": "CVE Program Container" }, { "affected": [ { "cpes": [ "cpe:2.3:a:tenable:nessus_network_monitor:*:*:*:*:*:*:*:*" ], "defaultStatus": "unknown", "product": "nessus_network_monitor", "vendor": "tenable", "versions": [ { "lessThan": "6.3.0", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2023-5624", "options": [ { "Exploitation": "none" }, { "Automatable": "no" }, { "Technical Impact": "total" } ], "role": "CISA Coordinator", "timestamp": "2024-09-09T15:45:31.720231Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2024-09-09T15:48:24.575Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "affected", "product": "Nessus Network Monitor", "vendor": "Tenable", "versions": [ { "lessThan": "6.3.0", "status": "affected", "version": "0", "versionType": "6.3.0" } ] } ], "datePublic": "2023-10-25T19:00:00.000Z", "descriptions": [ { "lang": "en", "supportingMedia": [ { "base64": false, "type": "text/html", "value": "\n\nUnder certain conditions, Nessus Network Monitor was found to not properly enforce input validation. This could allow an admin user to alter parameters that could potentially allow a blindSQL injection.\n\n" } ], "value": "\nUnder certain conditions, Nessus Network Monitor was found to not properly enforce input validation. 
This could allow an admin user to alter parameters that could potentially allow a blindSQL injection.\n\n" } ], "impacts": [ { "capecId": "CAPEC-7", "descriptions": [ { "lang": "en", "value": "CAPEC-7 Blind SQL Injection" } ] } ], "metrics": [ { "cvssV3_1": { "attackComplexity": "LOW", "attackVector": "NETWORK", "availabilityImpact": "HIGH", "baseScore": 7.2, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "integrityImpact": "HIGH", "privilegesRequired": "HIGH", "scope": "UNCHANGED", "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, "format": "CVSS", "scenarios": [ { "lang": "en", "value": "GENERAL" } ] } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-20", "description": "CWE-20 Improper Input Validation", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2023-10-26T16:36:32.251Z", "orgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "shortName": "tenable" }, "references": [ { "url": "https://www.tenable.com/security/tns-2023-34" } ], "source": { "advisory": "TNS-2023-34", "discovery": "EXTERNAL" }, "title": "Blind SQL Injection", "x_generator": { "engine": "Vulnogram 0.1.0-dev" } } }, "cveMetadata": { "assignerOrgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "assignerShortName": "tenable", "cveId": "CVE-2023-5624", "datePublished": "2023-10-26T16:36:32.251Z", "dateReserved": "2023-10-17T19:10:31.208Z", "dateUpdated": "2024-09-09T15:48:24.575Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2023-5623
Vulnerability from cvelistv5
Vendor | Product | Version
---|---|---
Tenable | Nessus Network Monitor | 0
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-02T08:07:32.253Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://www.tenable.com/security/tns-2023-34" } ], "title": "CVE Program Container" }, { "affected": [ { "cpes": [ "cpe:2.3:a:tenable:nessus_network_monitor:*:*:*:*:*:windows:*:*" ], "defaultStatus": "unknown", "product": "nessus_network_monitor", "vendor": "tenable", "versions": [ { "lessThan": "6.3.0", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2023-5623", "options": [ { "Exploitation": "none" }, { "Automatable": "no" }, { "Technical Impact": "total" } ], "role": "CISA Coordinator", "timestamp": "2024-09-09T15:45:42.723295Z", "version": "2.0.3" }, "type": "ssvc" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-276", "description": "CWE-276 Incorrect Default Permissions", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-09-09T15:50:01.791Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "affected", "platforms": [ "Windows" ], "product": "Nessus Network Monitor", "vendor": "Tenable", "versions": [ { "lessThan": "6.3.0", "status": "affected", "version": "0", "versionType": "6.3.0" } ] } ], "datePublic": "2023-10-25T19:00:00.000Z", "descriptions": [ { "lang": "en", "supportingMedia": [ { "base64": false, "type": "text/html", "value": "\n\nNNM failed to properly set ACLs on its installation directory, which could allow a low privileged user to run arbitrary code with SYSTEM privileges where NNM is installed to a non-standard location\n\n" } ], "value": "\nNNM failed to properly set ACLs on its installation directory, which could allow a low privileged user to run arbitrary code with SYSTEM privileges where NNM is installed to 
a non-standard location\n\n" } ], "impacts": [ { "capecId": "CAPEC-233", "descriptions": [ { "lang": "en", "value": "CAPEC-233 Privilege Escalation" } ] } ], "metrics": [ { "cvssV3_1": { "attackComplexity": "HIGH", "attackVector": "LOCAL", "availabilityImpact": "HIGH", "baseScore": 7, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, "format": "CVSS", "scenarios": [ { "lang": "en", "value": "GENERAL" } ] } ], "providerMetadata": { "dateUpdated": "2023-10-26T16:25:17.792Z", "orgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "shortName": "tenable" }, "references": [ { "url": "https://www.tenable.com/security/tns-2023-34" } ], "source": { "advisory": "TNS-2023-34", "discovery": "EXTERNAL" }, "title": "Privilege Escalation ", "x_generator": { "engine": "Vulnogram 0.1.0-dev" } } }, "cveMetadata": { "assignerOrgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "assignerShortName": "tenable", "cveId": "CVE-2023-5623", "datePublished": "2023-10-26T16:25:17.792Z", "dateReserved": "2023-10-17T19:03:43.341Z", "dateUpdated": "2024-09-09T15:50:01.791Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }