var-202103-1464
Vulnerability from variot
An OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client. If a TLSv1.2 renegotiation ClientHello omits the signature_algorithms extension (where it was present in the initial ClientHello) but includes a signature_algorithms_cert extension, then a NULL pointer dereference will result, leading to a crash and a denial of service attack. A server is only vulnerable if it has TLSv1.2 and renegotiation enabled (which is the default configuration). OpenSSL TLS clients are not impacted by this issue. All OpenSSL 1.1.1 versions are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. Fixed in OpenSSL 1.1.1k (Affected 1.1.1-1.1.1j).
OpenSSL is an open source general encryption library of the OpenSSL team that can implement the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc.
On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. Exploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. This advisory is available at the following link: tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-openssl-2021-GHY28dJd.
In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.
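The definitive fix is the upgrade to 1.1.1k described above. Until that upgrade can be applied, an operator can also remove the vulnerable code path by disabling renegotiation outright. A minimal sketch using Python's ssl module, which exposes the underlying OpenSSL option when built against OpenSSL 1.1.0h or later (an illustrative mitigation, not the OpenSSL project's stated remediation):

```python
import ssl

# Build a TLS server context and disable renegotiation entirely.
# CVE-2021-3449 is only reachable through a TLSv1.2 renegotiation
# ClientHello, so OP_NO_RENEGOTIATION closes the attack surface.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.options |= ssl.OP_NO_RENEGOTIATION
```

The same flag corresponds to SSL_OP_NO_RENEGOTIATION in a native OpenSSL server; disabling renegotiation is often acceptable because TLSv1.3 removed the feature altogether.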
Security Fix(es):
- NooBaa: noobaa-operator leaking RPC AuthToken into log files (CVE-2021-3528)
For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
- Currently, a newly restored PVC cannot be mounted if some of the OpenShift Container Platform nodes are running on a version of Red Hat Enterprise Linux which is less than 8.2, and the snapshot from which the PVC was restored is deleted. Workaround: Do not delete the snapshot from which the PVC was restored until the restored PVC is deleted. (BZ#1962483)
- Previously, the default backingstore was not created on AWS S3 when OpenShift Container Storage was deployed, due to incorrect identification of AWS S3. With this update, the default backingstore gets created when OpenShift Container Storage is deployed on AWS S3. (BZ#1927307)
- Previously, log messages were printed to the endpoint pod log even if the debug option was not set. With this update, the log messages are printed to the endpoint pod log only when the debug option is set. (BZ#1938106)
- Previously, the PVCs could not be provisioned as the rook-ceph-mds did not register the pod IP on the monitor servers, and hence every mount on the filesystem timed out, resulting in CephFS volume provisioning failure. With this update, an argument --public-addr=podIP is added to the MDS pod when the host network is not enabled, and hence the CephFS volume provisioning does not fail. (BZ#1949558)
- Previously, OpenShift Container Storage 4.2 clusters were not updated with the correct cache value, and hence MDSs in standby-replay might report an oversized cache, as rook did not apply the mds_cache_memory_limit argument during upgrades. With this update, the mds_cache_memory_limit argument is applied during upgrades and the MDS daemon operates normally. (BZ#1951348)
- Previously, the coredumps were not generated in the correct location, as rook set the config option log_file to an empty string since logging happened on stdout and not on the files, and hence Ceph read the value of log_file to build the dump path. With this update, rook does not set log_file and keeps Ceph's internal default, so the coredumps are generated in the correct location and are accessible under /var/log/ceph/. (BZ#1938049)
- Previously, Ceph became inaccessible, as the mons lost quorum if a mon pod was drained while another mon was failing over. With this update, voluntary mon drains are prevented while a mon is failing over, and hence Ceph does not become inaccessible. (BZ#1946573)
- Previously, the mon quorum was at risk, as the operator could erroneously remove the new mon if the operator was restarted during a mon failover. With this update, the operator completes the same mon failover after it is restarted, and hence the mon quorum is more reliable in node drain and mon failover scenarios.
Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1938106 - [GSS][RFE]Reduce debug level for logs of Nooba Endpoint pod
1950915 - XSS Vulnerability with Noobaa version 5.5.0-3bacc6b
1951348 - [GSS][CephFS] health warning "MDS cache is too large (3GB/1GB); 0 inodes in use by clients, 0 stray files" for the standby-replay
1951600 - [4.6.z][Clone of BZ #1936545] setuid and setgid file bits are not retained after a OCS CephFS CSI restore
1955601 - CVE-2021-3528 NooBaa: noobaa-operator leaking RPC AuthToken into log files
1957189 - [Rebase] Use RHCS4.2z1 container image with OCS 4..6.5[may require doc update for external mode min supported RHCS version]
1959980 - When a node is being drained, increase the mon failover timeout to prevent unnecessary mon failover
1959983 - [GSS][mon] rook-operator scales mons to 4 after healthCheck timeout
1962483 - [RHEL7][RBD][4.6.z clone] FailedMount error when using restored PVC on app pod
Bug Fix(es):
- WMCO patch pub-key-hash annotation to Linux node (BZ#1945248)
- LoadBalancer Service type with invalid external loadbalancer IP breaks the datapath (BZ#1952917)
- Telemetry info not completely available to identify windows nodes (BZ#1955319)
- WMCO incorrectly shows node as ready after a failed configuration (BZ#1956412)
- kube-proxy service terminated unexpectedly after recreated LB service (BZ#1963263)
- Solution:
For Windows Machine Config Operator upgrades, see the following documentation:
https://docs.openshift.com/container-platform/4.7/windows_containers/windows-node-upgrades.html
- Bugs fixed (https://bugzilla.redhat.com/):
1945248 - WMCO patch pub-key-hash annotation to Linux node
1946538 - CVE-2021-25736 kubernetes: LoadBalancer Service type don't create a HNS policy for empty or invalid external loadbalancer IP, what could lead to MITM
1952917 - LoadBalancer Service type with invalid external loadbalancer IP breaks the datapath
1955319 - Telemetry info not completely available to identify windows nodes
1956412 - WMCO incorrectly shows node as ready after a failed configuration
1963263 - kube-proxy service terminated unexpectedly after recreated LB service
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security:
- fastify-reply-from: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21321)
- fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21322)
- nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)
- redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)
- redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)
- nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
- nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
- nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)
- oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)
- redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
- nodejs-lodash: command injection via template (CVE-2021-23337)
- nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)
- browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)
- nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
- nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
- openssl: integer overflow in CipherUpdate (CVE-2021-23840)
- openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
- nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)
- grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)
- nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)
- nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)
- ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)
- normalize-url: ReDoS for data URLs (CVE-2021-33502)
- nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)
- nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
- html-parse-stringify: Regular Expression DoS (CVE-2021-23346)
- openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)
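Many of the entries above are ReDoS reports (lodash, glob-parent, path-parse, trim-newlines, is-svg, postcss, ua-parser-js, normalize-url). The shared mechanism is a regular expression with nested or overlapping quantifiers that backtracks exponentially on crafted non-matching input. A hypothetical minimal example of the pattern, not taken from any of the listed packages:

```python
import re

# Classic ReDoS shape: the nested quantifier in (a+)+ gives the engine
# exponentially many ways to partition a run of "a"s, and a backtracking
# matcher tries them all before the trailing "b" forces overall failure.
evil = re.compile(r"^(a+)+$")

assert evil.match("a" * 10)          # matching input succeeds quickly
# evil.match("a" * 40 + "b")         # non-matching input would effectively hang
```

The usual fixes, visible in the patches for these CVEs, are rewriting the expression without ambiguous nesting or bounding the input length before matching.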
For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.
Bugs:
- RFE Make the source code for the endpoint-metrics-operator public (BZ#1913444)
- cluster became offline after apiserver health check (BZ#1942589)
- Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
- It comprises the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407
=====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
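The digests above are standard OCI content digests: the literal prefix "sha256:" followed by the lowercase hex SHA-256 of the raw manifest bytes, so a pulled release payload can be cross-checked independently of oc. A minimal sketch (the helper name is illustrative, not part of any tool):

```python
import hashlib

def oci_digest(manifest_bytes: bytes) -> str:
    # An OCI image digest is "sha256:" plus the lowercase hex SHA-256
    # of the exact manifest bytes as served by the registry.
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

# Compare the result for your downloaded manifest against the value
# published in this advisory for your architecture.
```

Because the digest pins the exact manifest content, pulling by digest (as in the oc commands above) rather than by tag guarantees you inspect the same release image the advisory describes.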
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for detailed instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeeds will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ignored when oc command loads dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from an address pool that has both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is a high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers declaration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message "No log files exist"
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pull in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig"
Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP address not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' gets stuck when the cluster resource group no longer exists.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential request in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn't support aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation: openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor if multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.
This release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages that are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most significant bug fixes and enhancements included in this release. Solution:
Before applying the update, back up your existing installation, including all applications, configuration files, databases and database settings, and so on.
The References section of this erratum contains a download link for the update. You must be logged in to download the update. Bugs fixed (https://bugzilla.redhat.com/):
1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT
1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing
- ==========================================================================
Ubuntu Security Notice USN-5038-1
August 12, 2021
==========================================================================
postgresql-10, postgresql-12, postgresql-13 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.04
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in PostgreSQL.
Software Description:
- postgresql-13: Object-relational SQL database
- postgresql-12: Object-relational SQL database
- postgresql-10: Object-relational SQL database
Details:
It was discovered that the PostgreSQL planner could create incorrect plans in certain circumstances. A remote attacker could use this issue to cause PostgreSQL to crash, resulting in a denial of service, or possibly obtain sensitive information from memory. (CVE-2021-3677)
It was discovered that PostgreSQL incorrectly handled certain SSL renegotiation ClientHello messages from clients. A remote attacker could possibly use this issue to cause PostgreSQL to crash, resulting in a denial of service. (CVE-2021-3449)
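Until patched packages are installed, a TLS server linked against an affected OpenSSL can sidestep this class of renegotiation bugs by refusing renegotiation outright. A minimal sketch using Python's ssl module, which exposes OpenSSL's SSL_OP_NO_RENEGOTIATION option (available only with OpenSSL 1.1.0h and later; the hasattr guard reflects that assumption):

```python
import ssl

# Build a server-side TLS context. TLS 1.3 has no renegotiation at all;
# TLS 1.2 renegotiation is what CVE-2021-3449 abuses.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Refuse all renegotiation attempts (maps to OpenSSL's
# SSL_OP_NO_RENEGOTIATION; only present on OpenSSL >= 1.1.0h builds).
if hasattr(ssl, "OP_NO_RENEGOTIATION"):
    ctx.options |= ssl.OP_NO_RENEGOTIATION
```

This is a mitigation sketch, not a substitute for upgrading to a fixed OpenSSL (1.1.1k) or the patched distribution packages.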
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.04: postgresql-13 13.4-0ubuntu0.21.04.1
Ubuntu 20.04 LTS: postgresql-12 12.8-0ubuntu0.20.04.1
Ubuntu 18.04 LTS: postgresql-10 10.18-0ubuntu0.18.04.1
This update uses a new upstream release, which includes additional bug fixes. After a standard system update you need to restart PostgreSQL to make all the necessary changes.
Security Fix(es):
- golang: crypto/tls: certificate of wrong type is causing TLS client to panic (CVE-2021-34558)
- golang: net: lookup functions may return invalid host names (CVE-2021-33195)
- golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty (CVE-2021-33197)
- golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents (CVE-2021-33198)
- golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a custom TokenReader (CVE-2021-27918)
- golang: net/http: panic in ReadRequest and ReadResponse when reading a very large header (CVE-2021-31525)
- golang: archive/zip: malformed archive may cause panic or memory exhaustion (CVE-2021-33196)
It was found that CVE-2021-27918, CVE-2021-31525, and CVE-2021-33196 had been incorrectly listed as fixed in the RHSA for Serverless client kn 1.16.0. Bugs fixed (https://bugzilla.redhat.com/):
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic
1983651 - Release of OpenShift Serverless Serving 1.17.0
1983654 - Release of OpenShift Serverless Eventing 1.17.0
1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names
1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty
1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents
1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196
5. OpenSSL Security Advisory [25 March 2021]
CA certificate check bypass with X509_V_FLAG_X509_STRICT (CVE-2021-3450)
Severity: High
The X509_V_FLAG_X509_STRICT flag enables additional security checks of the certificates present in a certificate chain. It is not set by default.
Starting from OpenSSL version 1.1.1h a check to disallow certificates in the chain that have explicitly encoded elliptic curve parameters was added as an additional strict check.
An error in the implementation of this check meant that the result of a previous check to confirm that certificates in the chain are valid CA certificates was overwritten. This effectively bypasses the check that non-CA certificates must not be able to issue other certificates.
If a "purpose" has been configured then there is a subsequent opportunity for checks that the certificate is a valid CA. All of the named "purpose" values implemented in libcrypto perform this check. Therefore, where a purpose is set the certificate chain will still be rejected even when the strict flag has been used. A purpose is set by default in libssl client and server certificate verification routines, but it can be overridden or removed by an application.
In order to be affected, an application must explicitly set the X509_V_FLAG_X509_STRICT verification flag and either not set a purpose for the certificate verification or, in the case of TLS client or server applications, override the default purpose.
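The affected configuration described above can be illustrated from the application side. Python's ssl module wraps these same OpenSSL knobs: VERIFY_X509_STRICT corresponds to X509_V_FLAG_X509_STRICT, and create_default_context() keeps a default purpose (server authentication), which is exactly the condition the advisory says protects an application even on an unpatched 1.1.1h-1.1.1j build. A sketch under those assumptions:

```python
import ssl

# create_default_context() sets Purpose.SERVER_AUTH, so the libcrypto
# "purpose" check stays active -- per the advisory, this configuration
# is NOT affected even with the strict flag enabled.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# Opt into the extra strict checks that CVE-2021-3450 concerns
# (equivalent to setting X509_V_FLAG_X509_STRICT in C code).
ctx.verify_flags |= ssl.VERIFY_X509_STRICT

# An application would only be exposed if it also removed or overrode
# the purpose check, which this context does not do.
```

In C code the analogous calls would be X509_VERIFY_PARAM_set_flags() with X509_V_FLAG_X509_STRICT and X509_VERIFY_PARAM_set_purpose(); leaving the purpose set is the safe choice.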
This issue was reported to OpenSSL on 18th March 2021 by Benjamin Kaduk from Akamai and was discovered by Xiang Ding and others at Akamai. The fix was developed by Tomáš Mráz.
The NULL pointer dereference issue (CVE-2021-3449) was reported to OpenSSL on 17th March 2021 by Nokia. The fix was developed by Peter Kästle and Samuel Sapalski from Nokia.
Note
OpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended support is available for premium support customers: https://www.openssl.org/support/contracts.html
OpenSSL 1.1.0 is out of support and no longer receiving updates of any kind.
References
URL for this Security Advisory: https://www.openssl.org/news/secadv/20210325.txt
Note: the online version of the advisory may be updated with additional details over time.
For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202103-1464", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "nessus", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "8.13.1" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.57" }, { "model": "mysql workbench", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "scalance s627-2m", "scope": 
"gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "sonicos", "scope": "eq", "trust": 1.0, "vendor": "sonicwall", "version": "7.0.1.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "12.12.0" }, { "model": "scalance xr-300wg", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "mysql server", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.15" }, { "model": "quantum security management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r81" }, { "model": "scalance xp-200", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.0.0" }, { "model": "sinamics connect 300", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance xc-200", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "simatic net cp 1543-1", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "simatic net cp 1542sp-1 irc", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.13.0" }, { "model": "scalance xr526-8c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "simatic net cp1243-7 lte us", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "scalance w700", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.5" }, { "model": "sinec infrastructure network services", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0.1.1" }, { "model": "scalance xr552-12", "scope": "lt", "trust": 
1.0, "vendor": "siemens", "version": "6.4" }, { "model": "simatic mv500", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance xm-400", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "scalance s615", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.2" }, { "model": "tia administrator", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "log correlation engine", "scope": "lt", "trust": 1.0, "vendor": "tenable", "version": "6.0.9" }, { "model": "multi-domain management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r80.40" }, { "model": "capture client", "scope": "eq", "trust": 1.0, "vendor": "sonicwall", "version": "3.5" }, { "model": "simatic cloud connect 7", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "simatic pcs neo", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic rf186c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic s7-1200 cpu 1212c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic cp 1242-7 gprs v2", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "simatic s7-1200 cpu 1215 fc", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic logon", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "1.6.0.2" }, { "model": "freebsd", "scope": "eq", "trust": 1.0, "vendor": "freebsd", "version": "12.2" }, { "model": "simatic logon", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.5" }, { "model": "simatic hmi basic panels 2nd generation", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, 
"vendor": "netapp", "version": null }, { "model": "quantum security gateway", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r81" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.16.1" }, { "model": "primavera unifier", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "17.12" }, { "model": "simatic rf188ci", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "oncommand insight", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec nms", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.1" }, { "model": "scalance xr524-8c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "tim 1531 irc", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.0" }, { "model": "sinec pni", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "storagegrid", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "12.22.1" }, { "model": "communications communications policy management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.6.0.0.0" }, { "model": "scalance s612", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "simatic hmi ktp mobile panels", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "15.0.0" }, { "model": "sinumerik opc ua server", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": 
"web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "e-series performance analyzer", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "simatic cp 1242-7 gprs v2", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "tenable.sc", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "5.17.0" }, { "model": "zfs storage appliance kit", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.8" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "10.24.0" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "10.12.0" }, { "model": "simatic wincc runtime advanced", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic process historian opc ua server", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2019" }, { "model": "simatic net cp 1545-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "ruggedcom rcm1224", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.2" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "simatic cloud connect 7", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "1.1" }, { "model": "scalance xr528-6m", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "14.14.0" }, { "model": "cloud volumes ontap mediator", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "multi-domain management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": 
"r81" }, { "model": "scalance xf-200ba", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "simatic s7-1500 cpu 1518-4 pn\\/dp mfp", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.12" }, { "model": "scalance m-800", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.2" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1k" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.15.0" }, { "model": "snapcenter", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "scalance lpe9403", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic hmi comfort outdoor panels", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance sc-600", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.0" }, { "model": "quantum security management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r80.40" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.3.5" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.59" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.58" }, { "model": "sma100", "scope": "lt", "trust": 1.0, "vendor": "sonicwall", "version": "10.2.1.0-17sv" }, { "model": "simatic rf186ci", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" 
}, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.12" }, { "model": "sinema server", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "14.0" }, { "model": "simatic net cp 1243-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "simatic s7-1200 cpu 1217c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "primavera unifier", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "17.7" }, { "model": "simatic rf185c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "tenable.sc", "scope": "gte", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.12" }, { "model": "simatic s7-1200 cpu 1211c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance s623", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "simatic s7-1200 cpu 1215c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.0.0.2" }, { "model": "simatic rf166c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "tim 1531 irc", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.2" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.13.0" }, { "model": "simatic net cp 1543sp-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.1" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "scalance w1700", "scope": "gte", "trust": 1.0, 
"vendor": "siemens", "version": "2.0" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "5.7.33" }, { "model": "simatic s7-1200 cpu 1212fc", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic net cp1243-7 lte eu", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "simatic wincc telecontrol", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "simatic net cp 1243-8 irc", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "simatic pcs 7 telecontrol", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "15.14.0" }, { "model": "simatic rf360r", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "jd edwards enterpriseone tools", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "9.2.6.0" }, { "model": "sma100", "scope": "gte", "trust": 1.0, "vendor": "sonicwall", "version": "10.2.0.0" }, { "model": "scalance xb-200", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1" }, { "model": "scalance s602", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.0.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.11.0" }, { "model": "quantum security gateway", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r80.40" }, { "model": "secure global desktop", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.6" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", 
"version": "5.11.1" }, { "model": "essbase", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.2" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.3.1.2" }, { "model": "mysql connectors", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "secure backup", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "18.1.0.1.0" }, { "model": "simatic net cp 1543-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.2" }, { "model": "simatic s7-1200 cpu 1214 fc", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "simatic rf188c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic s7-1200 cpu 1214c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "enterprise manager for storage management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.4.0.0" }, { "model": "simatic pdm", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "9.1.0.7" }, { "model": "hitachi ops center analyzer viewpoint", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "storagegrid", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "quantum security gateway", "scope": null, "trust": 0.8, "vendor": "\u30c1\u30a7\u30c3\u30af \u30dd\u30a4\u30f3\u30c8 \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2 \u30c6\u30af\u30ce\u30ed\u30b8\u30fc\u30ba", "version": null }, { "model": "tenable.sc", "scope": null, "trust": 0.8, "vendor": "tenable", "version": null }, { "model": "nessus", "scope": null, "trust": 0.8, "vendor": "tenable", "version": null }, { "model": "oncommand workflow automation", "scope": 
null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "freebsd", "scope": null, "trust": 0.8, "vendor": "freebsd", "version": null }, { "model": "hitachi ops center common services", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "santricity smi-s provider", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "mcafee web gateway \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2", "scope": null, "trust": 0.8, "vendor": "\u30de\u30ab\u30d5\u30a3\u30fc", "version": null }, { "model": "e-series performance analyzer", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "jp1/file transmission server/ftp", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "quantum security management", "scope": null, "trust": 0.8, "vendor": "\u30c1\u30a7\u30c3\u30af \u30dd\u30a4\u30f3\u30c8 \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2 \u30c6\u30af\u30ce\u30ed\u30b8\u30fc\u30ba", "version": null }, { "model": "openssl", "scope": null, "trust": 0.8, "vendor": "openssl", "version": null }, { "model": "cloud volumes ontap \u30e1\u30c7\u30a3\u30a8\u30fc\u30bf", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "jp1/base", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "web gateway cloud service", "scope": null, "trust": 0.8, "vendor": "\u30de\u30ab\u30d5\u30a3\u30fc", "version": null }, { "model": "multi-domain management", "scope": null, "trust": 0.8, "vendor": "\u30c1\u30a7\u30c3\u30af \u30dd\u30a4\u30f3\u30c8 \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2 \u30c6\u30af\u30ce\u30ed\u30b8\u30fc\u30ba", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, 
"configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1k", "versionStartIncluding": "1.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapcenter:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storagegrid:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_insight:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:netapp:cloud_volumes_ontap_mediator:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:e-series_performance_analyzer:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:tenable:tenable.sc:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.17.0", "versionStartIncluding": "5.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.13.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.13.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:log_correlation_engine:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.0.9", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:mcafee:web_gateway:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_management_firmware:r80.40:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_management_firmware:r81:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:checkpoint:quantum_security_management:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:checkpoint:multi-domain_management_firmware:r80.40:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:checkpoint:multi-domain_management_firmware:r81:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:checkpoint:multi-domain_management:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_gateway_firmware:r80.40:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_gateway_firmware:r81:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:checkpoint:quantum_security_gateway:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "17.12", "versionStartIncluding": "17.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:19.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_for_storage_management:13.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:20.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:zfs_storage_appliance_kit:8.8:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_global_desktop:5.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:20.3.1.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:21.0.0.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:19.3.5:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "versionStartIncluding": "8.0.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.7.33", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:essbase:21.2:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_connectors:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.2.6.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:21.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_backup:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "18.1.0.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_communications_policy_management:12.6.0.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:sonicwall:sma100_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.2.1.0-17sv", "versionStartIncluding": "10.2.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:sonicwall:sma100:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:sonicwall:capture_client:3.5:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:sonicwall:sonicos:7.0.1.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rcm1224_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rcm1224:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_lpe9403_firmware:*:*:*:*:*:*:*:*", 
"cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_lpe9403:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_m-800_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_m-800:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s602_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s602:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s612_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s612:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s615_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s615:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:siemens:scalance_s623_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s623:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s627-2m_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s627-2m:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_sc-600_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_sc-600:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_w700_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.5", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_w700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_w1700_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_w1700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], 
"operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xb-200_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xb-200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xc-200_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xc-200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xf-200ba_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xf-200ba:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xm-400_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xm-400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xp-200_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:h:siemens:scalance_xp-200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr-300wg_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr-300wg:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr524-8c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr524-8c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr526-8c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr526-8c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr528-6m_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr528-6m:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr552-12_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": 
"6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr552-12:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cloud_connect_7_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "1.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cloud_connect_7_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_cloud_connect_7:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cp_1242-7_gprs_v2_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cp_1242-7_gprs_v2_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_cp_1242-7_gprs_v2:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_hmi_basic_panels_2nd_generation_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_hmi_basic_panels_2nd_generation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_hmi_comfort_outdoor_panels_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], 
"operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_hmi_comfort_outdoor_panels:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_hmi_ktp_mobile_panels_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_hmi_ktp_mobile_panels:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_mv500_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_mv500:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1243-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1243-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp1243-7_lte_eu_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp1243-7_lte_eu:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:siemens:simatic_net_cp1243-7_lte_us_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp1243-7_lte_us:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1243-8_irc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1243-8_irc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1542sp-1_irc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1542sp-1_irc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1543-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0", "versionStartIncluding": "2.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1543-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1543sp-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:h:siemens:simatic_net_cp_1543sp-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1545-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "1.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1545-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_pcs_7_telecontrol_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_pcs_7_telecontrol:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_pcs_neo_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_pcs_neo:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_pdm_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "9.1.0.7", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_pdm:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_process_historian_opc_ua_server_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2019", 
"vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_process_historian_opc_ua_server:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf166c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf166c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf185c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf185c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf186c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf186c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf186ci_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf186ci:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf188c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { 
"children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf188c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf188ci_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf188ci:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf360r_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf360r:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1211c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1211c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1212c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1212c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1212fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], 
"cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1212fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1214_fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1214_fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1214c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1214c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1214_fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1214_fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1215_fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1215_fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1215c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true 
} ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1215c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1217c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1217c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1500_cpu_1518-4_pn\\/dp_mfp_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1500_cpu_1518-4_pn\\/dp_mfp:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:sinamics_connect_300_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:sinamics_connect_300:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:tim_1531_irc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.2", "versionStartIncluding": "2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:tim_1531_irc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:siemens:simatic_wincc_runtime_advanced:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp2_update1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:simatic_logon:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "1.6.0.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:simatic_logon:1.5:sp3_update_1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:simatic_wincc_telecontrol:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_nms:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_nms:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_pni:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:tia_administrator:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp2_update2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinumerik_opc_ua_server:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_infrastructure_network_services:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "14.14.0", "versionStartIncluding": "14.0.0", "vulnerable": true }, { 
"cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "10.12.0", "versionStartIncluding": "10.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "12.12.0", "versionStartIncluding": "12.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "14.16.1", "versionStartIncluding": "14.15.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "12.22.1", "versionStartIncluding": "12.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndIncluding": "10.24.0", "versionStartIncluding": "10.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "15.14.0", "versionStartIncluding": "15.0.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3449" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "164192" } ], "trust": 0.7 }, "cve": "CVE-2021-3449", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": 
"@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-3449", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-388130", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.2, "impactScore": 3.6, "integrityImpact": 
"NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 5.9, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-3449", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-3449", "trust": 1.8, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-388130", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-3449", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "An OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client. If a TLSv1.2 renegotiation ClientHello omits the signature_algorithms extension (where it was present in the initial ClientHello), but includes a signature_algorithms_cert extension then a NULL pointer dereference will result, leading to a crash and a denial of service attack. A server is only vulnerable if it has TLSv1.2 and renegotiation enabled (which is the default configuration). OpenSSL TLS clients are not impacted by this issue. All OpenSSL 1.1.1 versions are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. 
Fixed in OpenSSL 1.1.1k (Affected 1.1.1-1.1.1j). OpenSSL is an open source general encryption library of the Openssl team that can implement the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. \nExploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. \nThis advisory is available at the following link:tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-openssl-2021-GHY28dJd. In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3 compatible API. \n\nSecurity Fix(es):\n\n* NooBaa: noobaa-operator leaking RPC AuthToken into log files\n(CVE-2021-3528)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, and other related information, refer to the CVE page(s) listed in\nthe References section. \n\nBug Fix(es):\n\n* Currently, a newly restored PVC cannot be mounted if some of the\nOpenShift Container Platform nodes are running on a version of Red Hat\nEnterprise Linux which is less than 8.2, and the snapshot from which the\nPVC was restored is deleted. \nWorkaround: Do not delete the snapshot from which the PVC was restored\nuntil the restored PVC is deleted. (BZ#1962483)\n\n* Previously, the default backingstore was not created on AWS S3 when\nOpenShift Container Storage was deployed, due to incorrect identification\nof AWS S3. With this update, the default backingstore gets created when\nOpenShift Container Storage is deployed on AWS S3. 
(BZ#1927307)\n\n* Previously, log messages were printed to the endpoint pod log even if the\ndebug option was not set. With this update, the log messages are printed to\nthe endpoint pod log only when the debug option is set. (BZ#1938106)\n\n* Previously, the PVCs could not be provisioned as the `rook-ceph-mds` did\nnot register the pod IP on the monitor servers, and hence every mount on\nthe filesystem timed out, resulting in CephFS volume provisioning failure. \nWith this update, an argument `--public-addr=podIP` is added to the MDS pod\nwhen the host network is not enabled, and hence the CephFS volume\nprovisioning does not fail. (BZ#1949558)\n\n* Previously, OpenShift Container Storage 4.2 clusters were not updated\nwith the correct cache value, and hence MDSs in standby-replay might report\nan oversized cache, as rook did not apply the `mds_cache_memory_limit`\nargument during upgrades. With this update, the `mds_cache_memory_limit`\nargument is applied during upgrades and the mds daemon operates normally. \n(BZ#1951348)\n\n* Previously, the coredumps were not generated in the correct location as\nrook was setting the config option `log_file` to an empty string since\nlogging happened on stdout and not on the files, and hence Ceph read the\nvalue of the `log_file` to build the dump path. With this update, rook does\nnot set the `log_file` and keeps Ceph\u0027s internal default, and hence the\ncoredumps are generated in the correct location and are accessible under\n`/var/log/ceph/`. (BZ#1938049)\n\n* Previously, Ceph became inaccessible, as the mons lose quorum if a mon\npod was drained while another mon was failing over. With this update,\nvoluntary mon drains are prevented while a mon is failing over, and hence\nCeph does not become inaccessible. (BZ#1946573)\n\n* Previously, the mon quorum was at risk, as the operator could erroneously\nremove the new mon if the operator was restarted during a mon failover. 
\nWith this update, the operator completes the same mon failover after the\noperator is restarted, and hence the mon quorum is more reliable in the\nnode drains and mon failover scenarios. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1938106 - [GSS][RFE]Reduce debug level for logs of Nooba Endpoint pod\n1950915 - XSS Vulnerability with Noobaa version 5.5.0-3bacc6b\n1951348 - [GSS][CephFS] health warning \"MDS cache is too large (3GB/1GB); 0 inodes in use by clients, 0 stray files\" for the standby-replay\n1951600 - [4.6.z][Clone of BZ #1936545] setuid and setgid file bits are not retained after a OCS CephFS CSI restore\n1955601 - CVE-2021-3528 NooBaa: noobaa-operator leaking RPC AuthToken into log files\n1957189 - [Rebase] Use RHCS4.2z1 container image with OCS 4..6.5[may require doc update for external mode min supported RHCS version]\n1959980 - When a node is being drained, increase the mon failover timeout to prevent unnecessary mon failover\n1959983 - [GSS][mon] rook-operator scales mons to 4 after healthCheck timeout\n1962483 - [RHEL7][RBD][4.6.z clone] FailedMount error when using restored PVC on app pod\n\n5. \n\nBug Fix(es):\n\n* WMCO patch pub-key-hash annotation to Linux node (BZ#1945248)\n\n* LoadBalancer Service type with invalid external loadbalancer IP breaks\nthe datapath (BZ#1952917)\n\n* Telemetry info not completely available to identify windows nodes\n(BZ#1955319)\n\n* WMCO incorrectly shows node as ready after a failed configuration\n(BZ#1956412)\n\n* kube-proxy service terminated unexpectedly after recreated LB service\n(BZ#1963263)\n\n3. 
Solution:

For Windows Machine Config Operator upgrades, see the following documentation:

https://docs.openshift.com/container-platform/4.7/windows_containers/windows-node-upgrades.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1945248 - WMCO patch pub-key-hash annotation to Linux node
1946538 - CVE-2021-25736 kubernetes: LoadBalancer Service type doesn't create a HNS policy for empty or invalid external loadbalancer IP, which could lead to MITM
1952917 - LoadBalancer Service type with invalid external loadbalancer IP breaks the datapath
1955319 - Telemetry info not completely available to identify windows nodes
1956412 - WMCO incorrectly shows node as ready after a failed configuration
1963263 - kube-proxy service terminated unexpectedly after recreated LB service

5. Description:

Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console, with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/

Security:

* fastify-reply-from: crafted URL allows prefix escape of the proxied backend service (CVE-2021-21321)
* fastify-http-proxy: crafted URL allows prefix escape of the proxied backend service (CVE-2021-21322)
* nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)
* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)
* redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)
* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)
* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)
* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)
* redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
* nodejs-lodash: command injection via template (CVE-2021-23337)
* nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)
* browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)
* nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)
* nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
* nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)
* nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
* openssl: integer overflow in CipherUpdate (CVE-2021-23840)
* openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
* nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)
* grafana: snapshot feature allows an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)
* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)
* nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)
* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)
* normalize-url: ReDoS for data URLs (CVE-2021-33502)
* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)
* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)
* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)

For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.

Bugs:

* RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)
* cluster became offline after apiserver health check (BZ# 1942589)

3. Bugs fixed (https://bugzilla.redhat.com/):

1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allows an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix escape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix escape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters

5. It comprises the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502
                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844
                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050
                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903
                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807
                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867
                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894
                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902
                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807
                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895
                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952
                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434
                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503
                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618
                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362
                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326
                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541
                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305
                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947
                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666
                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928
                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938
                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226
                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716
                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673
                   CVE-2022-24407
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory.
See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
* grafana: Snapshot authentication bypass (CVE-2021-39226)
* golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
* nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
* golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
* grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
* grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command.
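As a minimal sketch of using the digests quoted above: rather than upgrading to a floating tag, an administrator can pin the upgrade to the exact per-architecture image digest published in this advisory. The digest value below is the x86_64 digest from the text above; the commented `oc adm upgrade` invocation assumes a live cluster and cluster-admin credentials.

```shell
# Build the by-digest pullspec for the 4.10.3 x86_64 release image
# (digest taken from this advisory; swap in the s390x/ppc64le digest as needed).
DIGEST="sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"
PULLSPEC="quay.io/openshift-release-dev/ocp-release@${DIGEST}"
echo "${PULLSPEC}"

# On a live cluster you would then upgrade explicitly to that image:
#   oc adm upgrade --to-image="${PULLSPEC}" --allow-explicit-upgrade
```

Pinning by digest (rather than by tag) ensures the cluster pulls exactly the release image this advisory was tested against.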
Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and its storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state, alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch.* labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several pages
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not fully displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ignored when oc command loads dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panics when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers declaration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform
\n2001765 - Some error message in the log of diskmaker-manager caused confusion\n2001784 - show loading page before final results instead of showing a transient message No log files exist\n2001804 - Reload feature on Environment section in Build Config form does not work properly\n2001810 - cluster admin unable to view BuildConfigs in all namespaces\n2001817 - Failed to load RoleBindings list that will lead to \u2018Role name\u2019 is not able to be selected on Create RoleBinding page as well\n2001823 - OCM controller must update operator status\n2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start\n2001835 - Could not select image tag version when create app from dev console\n2001855 - Add capacity is disabled for ocs-storagecluster\n2001856 - Repeating event: MissingVersion no image found for operand pod\n2001959 - Side nav list borders don\u0027t extend to edges of container\n2002007 - Layout issue on \"Something went wrong\" page\n2002010 - ovn-kube may never attempt to retry a pod creation\n2002012 - Cannot change volume mode when cloning a VM from a template\n2002027 - Two instances of Dotnet helm chart show as one in topology\n2002075 - opm render does not automatically pulling in the image(s) used in the deployments\n2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster\n2002125 - Network policy details page heading should be updated to Network Policy details\n2002133 - [e2e][automation] add support/virtualization and improve deleteResource\n2002134 - [e2e][automation] add test to verify vm details tab\n2002215 - Multipath day1 not working on s390x\n2002238 - Image stream tag is not persisted when switching from yaml to form editor\n2002262 - [vSphere] Incorrect user agent in vCenter sessions list\n2002266 - SinkBinding create form doesn\u0027t allow to use subject name, instead of label selector\n2002276 - OLM fails to upgrade operators immediately\n2002300 - Altering the Schedule Profile configurations 
doesn\u0027t affect the placement of the pods\n2002354 - Missing DU configuration \"Done\" status reporting during ZTP flow\n2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn\u0027t use commonjs\n2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation\n2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN\n2002397 - Resources search is inconsistent\n2002434 - CRI-O leaks some children PIDs\n2002443 - Getting undefined error on create local volume set page\n2002461 - DNS operator performs spurious updates in response to API\u0027s defaulting of service\u0027s internalTrafficPolicy\n2002504 - When the openshift-cluster-storage-operator is degraded because of \"VSphereProblemDetectorController_SyncError\", the insights operator is not sending the logs from all pods. \n2002559 - User preference for topology list view does not follow when a new namespace is created\n2002567 - Upstream SR-IOV worker doc has broken links\n2002588 - Change text to be sentence case to align with PF\n2002657 - ovn-kube egress IP monitoring is using a random port over the node network\n2002713 - CNO: OVN logs should have millisecond resolution\n2002748 - [ICNI2] \u0027ErrorAddingLogicalPort\u0027 failed to handle external GW check: timeout waiting for namespace event\n2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite\n2002763 - Two storage systems getting created with external mode RHCS\n2002808 - KCM does not use web identity credentials\n2002834 - Cluster-version operator does not remove unrecognized volume mounts\n2002896 - Incorrect result return when user filter data by name on search page\n2002950 - Why spec.containers.command is not created with \"oc create deploymentconfig \u003cdc-name\u003e --image=\u003cimage\u003e -- \u003ccommand\u003e\"\n2003096 - [e2e][automation] check bootsource URL is displaying on review step\n2003113 - OpenShift Baremetal IPI 
installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role\n2003120 - CI: Uncaught error with ResizeObserver on operand details page\n2003145 - Duplicate operand tab titles causes \"two children with the same key\" warning\n2003164 - OLM, fatal error: concurrent map writes\n2003178 - [FLAKE][knative] The UI doesn\u0027t show updated traffic distribution after accepting the form\n2003193 - Kubelet/crio leaks netns and veth ports in the host\n2003195 - OVN CNI should ensure host veths are removed\n2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting \u0027-e JENKINS_PASSWORD=password\u0027 ENV which was working for old container images\n2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace\n2003239 - \"[sig-builds][Feature:Builds][Slow] can use private repositories as build input\" tests fail outside of CI\n2003244 - Revert libovsdb client code\n2003251 - Patternfly components with list element has list item bullet when they should not. 
\n2003252 - \"[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig\" tests do not work as expected outside of CI\n2003269 - Rejected pods should be filtered from admission regression\n2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release\n2003426 - [e2e][automation] add test for vm details bootorder\n2003496 - [e2e][automation] add test for vm resources requirment settings\n2003641 - All metal ipi jobs are failing in 4.10\n2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state\n2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node\n2003683 - Samples operator is panicking in CI\n2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster \"Connection Details\" page\n2003715 - Error on creating local volume set after selection of the volume mode\n2003743 - Remove workaround keeping /boot RW for kdump support\n2003775 - etcd pod on CrashLoopBackOff after master replacement procedure\n2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver\n2003792 - Monitoring metrics query graph flyover panel is useless\n2003808 - Add Sprint 207 translations\n2003845 - Project admin cannot access image vulnerabilities view\n2003859 - sdn emits events with garbage messages\n2003896 - (release-4.10) ApiRequestCounts conditional gatherer\n2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas\n2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes\n2004059 - [e2e][automation] fix current tests for downstream\n2004060 - Trying to use basic spring boot sample causes crash on Firefox\n2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn\u0027t close after selection\n2004127 - [flake] openshift-controller-manager event 
reason/SuccessfulDelete occurs too frequently\n2004203 - build config\u0027s created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver\n2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory\n2004449 - Boot option recovery menu prevents image boot\n2004451 - The backup filename displayed in the RecentBackup message is incorrect\n2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts\n2004508 - TuneD issues with the recent ConfigParser changes. \n2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions\n2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs\n2004578 - Monitoring and node labels missing for an external storage platform\n2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days\n2004596 - [4.10] Bootimage bump tracker\n2004597 - Duplicate ramdisk log containers running\n2004600 - Duplicate ramdisk log containers running\n2004609 - output of \"crictl inspectp\" is not complete\n2004625 - BMC credentials could be logged if they change\n2004632 - When LE takes a large amount of time, multiple whereabouts are seen\n2004721 - ptp/worker custom threshold doesn\u0027t change ptp events threshold\n2004736 - [knative] Create button on new Broker form is inactive despite form being filled\n2004796 - [e2e][automation] add test for vm scheduling policy\n2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque\n2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card\n2004901 - [e2e][automation] improve kubevirt devconsole tests\n2004962 - Console frontend job consuming too much CPU in CI\n2005014 - state of ODF StorageSystem is misreported during installation or uninstallation\n2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines\n2005179 - pods status 
filter is not taking effect\n2005182 - sync list of deprecated apis about to be removed\n2005282 - Storage cluster name is given as title in StorageSystem details page\n2005355 - setuptools 58 makes Kuryr CI fail\n2005407 - ClusterNotUpgradeable Alert should be set to Severity Info\n2005415 - PTP operator with sidecar api configured throws bind: address already in use\n2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console\n2005554 - The switch status of the button \"Show default project\" is not revealed correctly in code\n2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable\n2005761 - QE - Implementing crw-basic feature file\n2005783 - Fix accessibility issues in the \"Internal\" and \"Internal - Attached Mode\" Installation Flow\n2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty\n2005854 - SSH NodePort service is created for each VM\n2005901 - KS, KCM and KA going Degraded during master nodes upgrade\n2005902 - Current UI flow for MCG only deployment is confusing and doesn\u0027t reciprocate any message to the end-user\n2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics\n2005971 - Change telemeter to report the Application Services product usage metrics\n2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files\n2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased\n2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types\n2006101 - Power off fails for drivers that don\u0027t support Soft power off\n2006243 - Metal IPI upgrade jobs are running out of disk space\n2006291 - 
bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn\u0027t use the 0th address\n2006308 - Backing Store YAML tab on click displays a blank screen on UI\n2006325 - Multicast is broken across nodes\n2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators\n2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource\n2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2006690 - OS boot failure \"x64 Exception Type 06 - Invalid Opcode Exception\"\n2006714 - add retry for etcd errors in kube-apiserver\n2006767 - KubePodCrashLooping may not fire\n2006803 - Set CoreDNS cache entries for forwarded zones\n2006861 - Add Sprint 207 part 2 translations\n2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap\n2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors\n2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded\n2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick\n2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails\n2007271 - CI Integration for Knative test cases\n2007289 - kubevirt tests are failing in CI\n2007322 - Devfile/Dockerfile import does not work for unsupported git host\n2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3. 
\n2007379 - Events are not generated for master offset for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all 
the kebab actions. \n2008521 - gcp-hostname service should correct invalid search entries in resolv.conf\n2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount\n2008539 - Registry doesn\u0027t fall back to secondary ImageContentSourcePolicy Mirror\n2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers\n2008599 - Azure Stack UPI does not have Internal Load Balancer\n2008612 - Plugin asset proxy does not pass through browser cache headers\n2008712 - VPA webhook timeout prevents all pods from starting\n2008733 - kube-scheduler: exposed /debug/pprof port\n2008911 - Prometheus repeatedly scaling prometheus-operator replica set\n2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2008987 - OpenShift SDN Hosted Egress IP\u0027s are not being scheduled to nodes after upgrade to 4.8.12\n2009055 - Instances of OCS to be replaced with ODF on UI\n2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs\n2009083 - opm blocks pruning of existing bundles during add\n2009111 - [IPI-on-GCP] \u0027Install a cluster with nested virtualization enabled\u0027 failed due to unable to launch compute instances\n2009131 - [e2e][automation] add more test about vmi\n2009148 - [e2e][automation] test vm nic presets and options\n2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator\n2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family\n2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted\n2009384 - UI changes to support BindableKinds CRD changes\n2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped\n2009424 - Deployment upgrade is failing availability check\n2009454 - Change web terminal subscription permissions from get to list\n2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer: ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. 
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10] sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs : Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn't enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as "BestEffort" qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
\n2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment\n2034785 - ptpconfig with summary_interval cannot be applied\n2034823 - RHEL9 should be starred in template list\n2034838 - An external router can inject routes if no service is added\n2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent\n2034879 - Lifecycle hook\u0027s name and owner shouldn\u0027t be allowed to be empty\n2034881 - Cloud providers components should use K8s 1.23 dependencies\n2034884 - ART cannot build the image because it tries to download controller-gen\n2034889 - `oc adm prune deployments` does not work\n2034898 - Regression in recently added Events feature\n2034957 - update openshift-apiserver to kube 1.23.1\n2035015 - ClusterLogForwarding CR remains stuck remediating forever\n2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster\n2035141 - [RFE] Show GPU/Host devices in template\u0027s details tab\n2035146 - \"kubevirt-plugin~PVC cannot be empty\" shows on add-disk modal while adding existing PVC\n2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting\n2035199 - IPv6 support in mtu-migration-dispatcher.yaml\n2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing\n2035250 - Peering with ebgp peer over multi-hops doesn\u0027t work\n2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices\n2035315 - invalid test cases for AWS passthrough mode\n2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env\n2035321 - Add Sprint 211 translations\n2035326 - [ExternalCloudProvider] installation with additional network on workers fails\n2035328 - Ccoctl does not ignore credentials request manifest marked for deletion\n2035333 - Kuryr orphans ports on 504 errors from Neutron\n2035348 - Fix two grammar issues in kubevirt-plugin.json strings\n2035393 - oc set data 
--dry-run=server makes persistent changes to configmaps and secrets\n2035409 - OLM E2E test depends on operator package that\u0027s no longer published\n2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address\n2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to \u0027ecs-cn-hangzhou.aliyuncs.com\u0027 timeout, although the specified region is \u0027us-east-1\u0027\n2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster\n2035467 - UI: Queried metrics can\u0027t be ordered on Oberve-\u003eMetrics page\n2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers\n2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class\n2035602 - [e2e][automation] add tests for Virtualization Overview page cards\n2035703 - Roles -\u003e RoleBindings tab doesn\u0027t show RoleBindings correctly\n2035704 - RoleBindings list page filter doesn\u0027t apply\n2035705 - Azure \u0027Destroy cluster\u0027 get stuck when the cluster resource group is already not existing. 
\n2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed\n2035772 - AccessMode and VolumeMode is not reserved for customize wizard\n2035847 - Two dashes in the Cronjob / Job pod name\n2035859 - the output of opm render doesn\u0027t contain olm.constraint which is defined in dependencies.yaml\n2035882 - [BIOS setting values] Create events for all invalid settings in spec\n2035903 - One redundant capi-operator credential requests in \u201coc adm extract --credentials-requests\u201d\n2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen\n2035927 - Cannot enable HighNodeUtilization scheduler profile\n2035933 - volume mode and access mode are empty in customize wizard review tab\n2035969 - \"ip a \" shows \"Error: Peer netns reference is invalid\" after create test pods\n2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation\n2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error\n2036029 - New added cloud-network-config operator doesn\u2019t supported aws sts format credential\n2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend\n2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes\n2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23\n2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23\n2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments\n2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists\n2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected\n2036826 - `oc adm prune deployments` can prune the RC/RS\n2036827 - The 
ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform\n2036861 - kube-apiserver is degraded while enable multitenant\n2036937 - Command line tools page shows wrong download ODO link\n2036940 - oc registry login fails if the file is empty or stdout\n2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container\n2036989 - Route URL copy to clipboard button wraps to a separate line by itself\n2036990 - ZTP \"DU Done inform policy\" never becomes compliant on multi-node clusters\n2036993 - Machine API components should use Go lang version 1.17\n2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. \n2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api\n2037073 - Alertmanager container fails to start because of startup probe never being successful\n2037075 - Builds do not support CSI volumes\n2037167 - Some log level in ibm-vpc-block-csi-controller are hard code\n2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles\n2037182 - PingSource badge color is not matched with knativeEventing color\n2037203 - \"Running VMs\" card is too small in Virtualization Overview\n2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly\n2037237 - Add \"This is a CD-ROM boot source\" to customize wizard\n2037241 - default TTL for noobaa cache buckets should be 0\n2037246 - Cannot customize auto-update boot source\n2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately\n2037288 - Remove stale image reference\n2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources\n2037483 - Rbacs for Pods within the CBO should be more restrictive\n2037484 - Bump dependencies to k8s 1.23\n2037554 - Mismatched wave number error message should include the wave numbers that are in 
conflict\n2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]\n2037635 - impossible to configure custom certs for default console route in ingress config\n2037637 - configure custom certificate for default console route doesn\u0027t take effect for OCP \u003e= 4.8\n2037638 - Builds do not support CSI volumes as volume sources\n2037664 - text formatting issue in Installed Operators list table\n2037680 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037689 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037801 - Serverless installation is failing on CI jobs for e2e tests\n2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format\n2037856 - use lease for leader election\n2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10\n2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests\n2037904 - upgrade operator deployment failed due to memory limit too low for manager container\n2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]\n2038034 - non-privileged user cannot see auto-update boot source\n2038053 - Bump dependencies to k8s 1.23\n2038088 - Remove ipa-downloader references\n2038160 - The `default` project missed the annotation : openshift.io/node-selector: \"\"\n2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional\n2038196 - must-gather is missing collecting some metal3 resources\n2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)\n2038253 - Validator Policies are long lived\n2038272 - Failures to build a PreprovisioningImage are not 
reported\n2038384 - Azure Default Instance Types are Incorrect\n2038389 - Failing test: [sig-arch] events should not repeat pathologically\n2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket\n2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips\n2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained\n2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect\n2038663 - update kubevirt-plugin OWNERS\n2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via \"oc adm groups new\"\n2038705 - Update ptp reviewers\n2038761 - Open Observe-\u003eTargets page, wait for a while, page become blank\n2038768 - All the filters on the Observe-\u003eTargets page can\u0027t work\n2038772 - Some monitors failed to display on Observe-\u003eTargets page\n2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node\n2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard\n2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation\n2038864 - E2E tests fail because multi-hop-net was not created\n2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console\n2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured\n2038968 - Move feature gates from a carry patch to openshift/api\n2039056 - Layout issue with breadcrumbs on API explorer page\n2039057 - Kind column is not wide enough in API explorer page\n2039064 - Bulk Import e2e test flaking at a high rate\n2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled\n2039085 - Cloud 
credential operator configuration failing to apply in hypershift/ROKS clusters\n2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost\n2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy\n2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator\n2039170 - [upgrade]Error shown on registry operator \"missing the cloud-provider-config configmap\" after upgrade\n2039227 - Improve image customization server parameter passing during installation\n2039241 - Improve image customization server parameter passing during installation\n2039244 - Helm Release revision history page crashes the UI\n2039294 - SDN controller metrics cannot be consumed correctly by prometheus\n2039311 - oc Does Not Describe Build CSI Volumes\n2039315 - Helm release list page should only fetch secrets for deployed charts\n2039321 - SDN controller metrics are not being consumed by prometheus\n2039330 - Create NMState button doesn\u0027t work in OperatorHub web console\n2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations\n2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights Advisor 
widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. \n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages\nthat are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most\nsignificant bug fixes and enhancements included in this release. 
Solution:\n\nBefore applying the update, back up your existing installation, including\nall applications, configuration files, databases and database settings, and\nso on. \n\nThe References section of this erratum contains a download link for the\nupdate. You must be logged in to download the update. Bugs fixed (https://bugzilla.redhat.com/):\n\n1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT\n1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing\n\n5. ==========================================================================\nUbuntu Security Notice USN-5038-1\nAugust 12, 2021\n\npostgresql-10, postgresql-12, postgresql-13 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.04\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in PostgreSQL. \n\nSoftware Description:\n- postgresql-13: Object-relational SQL database\n- postgresql-12: Object-relational SQL database\n- postgresql-10: Object-relational SQL database\n\nDetails:\n\nIt was discovered that the PostgreSQL planner could create incorrect plans\nin certain circumstances. A remote attacker could use this issue to cause\nPostgreSQL to crash, resulting in a denial of service, or possibly obtain\nsensitive information from memory. (CVE-2021-3677)\n\nIt was discovered that PostgreSQL incorrectly handled certain SSL\nrenegotiation ClientHello messages from clients. A remote attacker could\npossibly use this issue to cause PostgreSQL to crash, resulting in a denial\nof service. 
(CVE-2021-3449)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.04:\n postgresql-13 13.4-0ubuntu0.21.04.1\n\nUbuntu 20.04 LTS:\n postgresql-12 12.8-0ubuntu0.20.04.1\n\nUbuntu 18.04 LTS:\n postgresql-10 10.18-0ubuntu0.18.04.1\n\nThis update uses a new upstream release, which includes additional bug\nfixes. After a standard system update you need to restart PostgreSQL to\nmake all the necessary changes. \n\nSecurity Fix(es):\n\n* golang: crypto/tls: certificate of wrong type is causing TLS client to\npanic\n(CVE-2021-34558)\n* golang: net: lookup functions may return invalid host names\n(CVE-2021-33195)\n* golang: net/http/httputil: ReverseProxy forwards connection headers if\nfirst one is empty (CVE-2021-33197)\n* golang: math/big.Rat: may cause a panic or an unrecoverable fatal error\nif passed inputs with very large exponents (CVE-2021-33198)\n* golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a\ncustom TokenReader (CVE-2021-27918)\n* golang: net/http: panic in ReadRequest and ReadResponse when reading a\nvery large header (CVE-2021-31525)\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (CVE-2021-33196)\n\nIt was found that the CVE-2021-27918, CVE-2021-31525 and CVE-2021-33196\nhave been incorrectly mentioned as fixed in RHSA for Serverless client kn\n1.16.0. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1983651 - Release of OpenShift Serverless Serving 1.17.0\n1983654 - Release of OpenShift Serverless Eventing 1.17.0\n1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names\n1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty\n1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents\n1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196\n\n5. OpenSSL Security Advisory [25 March 2021]\n=========================================\n\nCA certificate check bypass with X509_V_FLAG_X509_STRICT (CVE-2021-3450)\n========================================================================\n\nSeverity: High\n\nThe X509_V_FLAG_X509_STRICT flag enables additional security checks of the\ncertificates present in a certificate chain. It is not set by default. \n\nStarting from OpenSSL version 1.1.1h a check to disallow certificates in\nthe chain that have explicitly encoded elliptic curve parameters was added\nas an additional strict check. \n\nAn error in the implementation of this check meant that the result of a\nprevious check to confirm that certificates in the chain are valid CA\ncertificates was overwritten. This effectively bypasses the check\nthat non-CA certificates must not be able to issue other certificates. \n\nIf a \"purpose\" has been configured then there is a subsequent opportunity\nfor checks that the certificate is a valid CA. All of the named \"purpose\"\nvalues implemented in libcrypto perform this check. Therefore, where\na purpose is set the certificate chain will still be rejected even when the\nstrict flag has been used. 
A purpose is set by default in libssl client and\nserver certificate verification routines, but it can be overridden or\nremoved by an application. \n\nIn order to be affected, an application must explicitly set the\nX509_V_FLAG_X509_STRICT verification flag and either not set a purpose\nfor the certificate verification or, in the case of TLS client or server\napplications, override the default purpose. \n\nThis issue was reported to OpenSSL on 18th March 2021 by Benjamin Kaduk\nfrom Akamai and was discovered by Xiang Ding and others at Akamai. The fix was\ndeveloped by Tom\u00e1\u0161 Mr\u00e1z. \n\nThis issue was reported to OpenSSL on 17th March 2021 by Nokia. The fix was\ndeveloped by Peter K\u00e4stle and Samuel Sapalski from Nokia. \n\nNote\n====\n\nOpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended\nsupport is available for premium support customers:\nhttps://www.openssl.org/support/contracts.html\n\nOpenSSL 1.1.0 is out of support and no longer receiving updates of any kind. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20210325.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. 
\n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n", "sources": [ { "db": "NVD", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" } ], "trust": 2.61 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-3449", "trust": 2.9 }, { "db": "TENABLE", "id": "TNS-2021-06", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-09", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-05", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-10", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/3", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/2", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/4", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/1", "trust": 1.2 }, { "db": "SIEMENS", "id": "SSA-772220", "trust": 1.2 }, { "db": "SIEMENS", "id": "SSA-389290", "trust": 1.2 }, { "db": "PULSESECURE", "id": "SA44845", "trust": 1.2 }, { "db": "MCAFEE", "id": "SB10356", "trust": 1.2 }, { "db": "JVN", "id": "JVNVU92126369", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2021-001383", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162197", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "163257", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162183", "trust": 0.2 }, { "db": 
"PACKETSTORM", "id": "162114", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162076", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162350", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162041", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162013", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162383", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162699", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162337", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162151", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162189", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162196", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162172", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161984", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162201", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162307", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162200", "trust": 0.1 }, { "db": "SEEBUG", "id": "SSVID-99170", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-388130", "trust": 0.1 }, { "db": "ICS CERT", "id": "ICSA-22-104-05", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-3449", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163209", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166279", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163815", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164192", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169659", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "id": 
"VAR-202103-1464", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-388130" } ], "trust": 0.6431162642424243 }, "last_update_date": "2024-07-23T21:43:25.615000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "hitachi-sec-2021-119 Software product security information", "trust": 0.8, "url": "https://www.debian.org/security/2021/dsa-4875" }, { "title": "Debian Security Advisories: DSA-4875-1 openssl -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=b5207bd1e788bc6e8d94f410cf4801bc" }, { "title": "Red Hat: CVE-2021-3449", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-3449" }, { "title": "Amazon Linux 2: ALAS2-2021-1622", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1622" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-3449 log" }, { "title": "Cisco: Multiple Vulnerabilities in OpenSSL Affecting Cisco Products: March 2021", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=cisco_security_advisories_and_alerts_ciscoproducts\u0026qid=cisco-sa-openssl-2021-ghy28djd" }, { "title": "Hitachi Security Advisories: Vulnerability in JP1/Base and JP1/ File Transmission Server/FTP", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-130" }, { "title": "Tenable Security Advisories: [R1] Tenable.sc 5.18.0 Fixes One Third-party Vulnerability", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-06" }, { "title": "Tenable Security Advisories: [R1] Nessus 8.13.2 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-05" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Common Services", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-117" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Analyzer viewpoint", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-119" }, { "title": "Tenable Security Advisories: [R1] Nessus Network Monitor 5.13.1 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-09" }, { "title": "Tenable Security Advisories: [R1] LCE 6.0.9 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-10" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220056 - security advisory" }, { "title": "CVE-2021-3449 OpenSSL \u003c1.1.1k DoS exploit", "trust": 0.1, "url": "https://github.com/terorie/cve-2021-3449 " }, { "title": "CVE-2021-3449 OpenSSL \u003c1.1.1k DoS exploit", "trust": 0.1, "url": "https://github.com/gitchangye/cve " }, { "title": "NSAPool-PenTest", "trust": 0.1, "url": "https://github.com/alicemongodin/nsapool-pentest " }, { "title": "Analysis of attack vectors for embedded Linux", "trust": 0.1, "url": "https://github.com/fefi7/attacking_embedded_linux " }, { "title": "openssl-cve", "trust": 0.1, "url": 
"https://github.com/yonhan3/openssl-cve " }, { "title": "CVE-Check", "trust": 0.1, "url": "https://github.com/falk-werner/cve-check " }, { "title": "SEEKER_dataset", "trust": 0.1, "url": "https://github.com/sf4bin/seeker_dataset " }, { "title": "Year of the Jellyfish (YotJF)", "trust": 0.1, "url": "https://github.com/rnbochsr/yr_of_the_jellyfish " }, { "title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories", "trust": 0.1, "url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories " }, { "title": "TASSL-1.1.1k", "trust": 0.1, "url": "https://github.com/jntass/tassl-1.1.1k " }, { "title": "Trivy by Aqua security\nRefer this official repository for explore Trivy Action", "trust": 0.1, "url": "https://github.com/scholarnishu/trivy-by-aquasecurity " }, { "title": "Trivy by Aqua security\nRefer this official repository for explore Trivy Action", "trust": 0.1, "url": "https://github.com/thecyberbaby/trivy-by-aquasecurity " }, { "title": "\ud83d\udc31 Catlin Vulnerability Scanner \ud83d\udc31", "trust": 0.1, "url": "https://github.com/vinamra28/tekton-image-scan-trivy " }, { "title": "DEVOPS + ACR + TRIVY", "trust": 0.1, "url": "https://github.com/arindam0310018/04-apr-2022-devops__scan-images-in-acr-using-trivy " }, { "title": "Trivy Demo", "trust": 0.1, "url": "https://github.com/fredrkl/trivy-demo " }, { "title": "GitHub Actions CI App Pipeline", "trust": 0.1, "url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc " }, { "title": "Awesome Stars", "trust": 0.1, "url": "https://github.com/taielab/awesome-hacking-lists " }, { "title": "podcast-dl-gael", "trust": 0.1, "url": "https://github.com/githubforsnap/podcast-dl-gael " }, { "title": "sec-tools", "trust": 0.1, "url": "https://github.com/matengfei000/sec-tools " }, { "title": "sec-tools", "trust": 0.1, "url": "https://github.com/anquanscan/sec-tools " }, { "title": "\u66f4\u65b0\u4e8e 2023-11-27 
08:36:01\n\u5b89\u5168\n\u5f00\u53d1\n\u672a\u5206\u7c7b\n\u6742\u4e03\u6742\u516b", "trust": 0.1, "url": "https://github.com/20142995/sectool " }, { "title": "Vulnerability", "trust": 0.1, "url": "https://github.com/tzwlhack/vulnerability " }, { "title": "OpenSSL-CVE-lib", "trust": 0.1, "url": "https://github.com/chnzzh/openssl-cve-lib " }, { "title": "PoC in GitHub", "trust": 0.1, "url": "https://github.com/soosmile/poc " }, { "title": "PoC in GitHub", "trust": 0.1, "url": "https://github.com/manas3c/cve-poc " }, { "title": "The Register", "trust": 0.1, "url": "https://www.theregister.co.uk/2021/03/25/openssl_bug_fix/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-476", "trust": 1.1 }, { "problemtype": "NULL Pointer dereference (CWE-476) [NVD Evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.3, "url": "https://tools.cisco.com/security/center/content/ciscosecurityadvisory/cisco-sa-openssl-2021-ghy28djd" }, { "trust": 1.3, "url": "https://www.openssl.org/news/secadv/20210325.txt" }, { "trust": 1.3, "url": "https://www.debian.org/security/2021/dsa-4875" }, { "trust": 1.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449" }, { "trust": 1.2, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf" }, { "trust": 1.2, "url": 
"https://cert-portal.siemens.com/productcert/pdf/ssa-772220.pdf" }, { "trust": 1.2, "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44845" }, { "trust": 1.2, "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2021-0013" }, { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20210326-0006/" }, { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20210513-0002/" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-05" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-06" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-09" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-10" }, { "trust": 1.2, "url": "https://security.gentoo.org/glsa/202103-03" }, { "trust": 1.2, "url": "https://security.freebsd.org/advisories/freebsd-sa-21:07.openssl.asc" }, { "trust": 1.2, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.2, "url": "https://lists.debian.org/debian-lts-announce/2021/08/msg00029.html" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/1" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/2" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/3" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/4" }, { "trust": 1.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026id=sb10356" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=fb9fa6b51defd48157eeb207f52181f735d96148" }, { "trust": 1.1, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 1.0, "url": "https://security.netapp.com/advisory/ntap-20240621-0006/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu92126369/" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.4, "url": 
"https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-13776" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-3842" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-13776" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24977" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3326" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=fb9fa6b51defd48157eeb207f52181f735d96148" }, { "trust": 0.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026amp;id=sb10356" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/476.html" }, { "trust": 0.1, "url": "https://github.com/terorie/cve-2021-3449" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-104-05" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2479" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23240" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3139" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13543" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9951" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23239" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36242" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9948" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25659" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26116" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13584" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26137" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13543" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36242" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13584" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25659" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27619" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9983" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3528" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25678" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23336" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25678" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25736" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2130" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-27219" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/windows_containers/window" }, { "trust": 0.1, "url": "https://issues.jboss.org/):" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1196" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30761" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27781" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0055" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2014-3577" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25660" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21684" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0056" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39226" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9952" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25677" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30666" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1200" }, { "trust": 0.1, "url": "https://access.redhat.com/jbossnetwork/restricted/listsoftware.html?product=core.service.apachehttp\u0026downloadtype=securitypatches\u0026version=2.4.37" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5038-1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3677" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/postgresql-10/10.18-0ubuntu0.18.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/postgresql-12/12.8-0ubuntu0.20.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/postgresql-13/13.4-0ubuntu0.21.04.1" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33195" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33197" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33195" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-34558" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3556" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33197" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3421" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3703" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index" }, { "trust": 0.1, "url": "https://www.openssl.org/support/contracts.html" }, { "trust": 0.1, "url": "https://www.openssl.org/policies/secpolicy.html" } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", 
"id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-03-25T00:00:00", "db": "VULHUB", "id": "VHN-388130" }, { "date": "2021-03-25T00:00:00", "db": "VULMON", "id": "CVE-2021-3449" }, { "date": "2021-05-06T00:00:00", "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "date": "2021-06-17T18:34:10", "db": "PACKETSTORM", "id": "163209" }, { "date": "2021-06-23T15:44:15", "db": "PACKETSTORM", "id": "163257" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": "2021-04-14T16:40:32", "db": "PACKETSTORM", "id": "162183" }, { "date": "2022-03-11T16:38:38", "db": "PACKETSTORM", "id": "166279" }, { "date": "2021-04-15T13:50:04", "db": "PACKETSTORM", "id": "162197" }, { "date": "2021-08-13T14:20:11", "db": "PACKETSTORM", "id": "163815" }, { "date": "2021-09-17T16:04:56", "db": "PACKETSTORM", "id": "164192" }, { "date": "2021-03-25T12:12:12", "db": "PACKETSTORM", "id": "169659" }, { "date": "2021-03-25T15:15:13.450000", 
"db": "NVD", "id": "CVE-2021-3449" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-08-29T00:00:00", "db": "VULHUB", "id": "VHN-388130" }, { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2021-3449" }, { "date": "2021-09-13T07:43:00", "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "date": "2024-06-21T19:15:19.710000", "db": "NVD", "id": "CVE-2021-3449" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "163815" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL\u00a0 In \u00a0NULL\u00a0 Pointer dereference vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-001383" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "xss", "sources": [ { "db": "PACKETSTORM", "id": "163209" } ], "trust": 0.1 } }
Sightings
Author | Source | Type | Date
---|---|---|---
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or seen somewhere by the user.
- Confirmed: The vulnerability is confirmed from an analyst perspective.
- Exploited: This vulnerability was exploited and seen by the user reporting the sighting.
- Patched: This vulnerability was successfully patched by the user reporting the sighting.
- Not exploited: This vulnerability was not exploited or seen by the user reporting the sighting.
- Not confirmed: The user expresses doubt about the veracity of the vulnerability.
- Not patched: This vulnerability was not successfully patched by the user reporting the sighting.