var-202302-0195
Vulnerability from variot
The function PEM_read_bio_ex() reads a PEM file from a BIO and parses and decodes the "name" (e.g. "CERTIFICATE"), any header data and the payload data. If the function succeeds then the "name_out", "header" and "data" arguments are populated with pointers to buffers containing the relevant decoded data. The caller is responsible for freeing those buffers. It is possible to construct a PEM file that results in 0 bytes of payload data. In this case PEM_read_bio_ex() will return a failure code but will populate the header argument with a pointer to a buffer that has already been freed. If the caller also frees this buffer then a double free will occur. This will most likely lead to a crash. This could be exploited by an attacker who has the ability to supply malicious PEM files for parsing to achieve a denial of service attack.
The functions PEM_read_bio() and PEM_read() are simple wrappers around PEM_read_bio_ex() and therefore these functions are also directly affected.
These functions are also called indirectly by a number of other OpenSSL functions including PEM_X509_INFO_read_bio_ex() and SSL_CTX_use_serverinfo_file() which are also vulnerable. Some OpenSSL internal uses of these functions are not vulnerable because the caller does not free the header argument if PEM_read_bio_ex() returns a failure code. These locations include the PEM_read_bio_TYPE() functions as well as the decoders introduced in OpenSSL 3.0.
The OpenSSL asn1parse command line application is also impacted by this issue. In summary: when a PEM file contains 0 bytes of payload data, PEM_read_bio_ex() returns a failure code but populates the header argument with a pointer to an already-freed buffer, creating a double-free vulnerability in OpenSSL. An attacker who can supply malicious PEM files may cause a denial of service (crash). Bugs fixed (https://bugzilla.redhat.com/):
2212085 - CVE-2023-3089 openshift: OCP & FIPS mode
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: OpenShift Container Platform 4.13.0 security update Advisory ID: RHSA-2023:1326-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2023:1326 Issue date: 2023-05-17 CVE Names: CVE-2021-4235 CVE-2021-4238 CVE-2021-20329 CVE-2021-38561 CVE-2021-43519 CVE-2021-44964 CVE-2022-1271 CVE-2022-1586 CVE-2022-1587 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2509 CVE-2022-2990 CVE-2022-3080 CVE-2022-3259 CVE-2022-4203 CVE-2022-4304 CVE-2022-4450 CVE-2022-21698 CVE-2022-23525 CVE-2022-23526 CVE-2022-26280 CVE-2022-27191 CVE-2022-29154 CVE-2022-29824 CVE-2022-34903 CVE-2022-38023 CVE-2022-38177 CVE-2022-38178 CVE-2022-40674 CVE-2022-41316 CVE-2022-41717 CVE-2022-41721 CVE-2022-41723 CVE-2022-41724 CVE-2022-41725 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-42919 CVE-2022-46146 CVE-2022-47629 CVE-2023-0056 CVE-2023-0215 CVE-2023-0216 CVE-2023-0217 CVE-2023-0229 CVE-2023-0286 CVE-2023-0361 CVE-2023-0401 CVE-2023-0620 CVE-2023-0665 CVE-2023-0778 CVE-2023-25000 CVE-2023-25165 CVE-2023-25173 CVE-2023-25577 CVE-2023-25725 CVE-2023-25809 CVE-2023-27561 CVE-2023-28642 CVE-2023-30570 CVE-2023-30841 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.13.0 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.13.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.13.0. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2023:1325
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html
Security Fix(es):
-
goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be (CVE-2021-4238)
-
go-yaml: Denial of Service in go-yaml (CVE-2021-4235)
-
mongo-go-driver: specific cstrings input may not be properly validated (CVE-2021-20329)
-
golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
-
prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
-
helm: Denial of service through repository index file (CVE-2022-23525)
-
helm: Denial of service through schema file (CVE-2022-23526)
-
golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
-
vault: insufficient certificate revocation list checking (CVE-2022-41316)
-
golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests (CVE-2022-41717)
-
x/net/http2/h2c: request smuggling (CVE-2022-41721)
-
net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding (CVE-2022-41723)
-
golang: crypto/tls: large handshake records may cause panics (CVE-2022-41724)
-
golang: net/http, mime/multipart: denial of service from excessive resource consumption (CVE-2022-41725)
-
exporter-toolkit: authentication bypass via cache poisoning (CVE-2022-46146)
-
vault: Vault’s Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File (CVE-2023-0620)
-
hashicorp/vault: Vault’s PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata (CVE-2023-0665)
-
hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations (CVE-2023-25000)
-
helm: getHostByName Function Information Disclosure (CVE-2023-25165)
-
containerd: Supplementary groups are not set up properly (CVE-2023-25173)
-
runc: volume mount race condition (regression of CVE-2019-19921) (CVE-2023-27561)
-
runc: AppArmor can be bypassed when /proc inside the container is symlinked with a specific mount configuration (CVE-2023-28642)
-
baremetal-operator: plain-text username and hashed password readable by anyone having a cluster-wide read-access (CVE-2023-30841)
-
runc: Rootless runc makes /sys/fs/cgroup writable (CVE-2023-25809)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
All OpenShift Container Platform 4.13 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift CLI (oc) or web console. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.13 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html
You may download the oc tool and use it to inspect release image metadata for x86_64, s390x, ppc64le, and aarch64 architectures. The image digests may be found at https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags
The sha values for the release are:
(For x86_64 architecture) The image digest is sha256:74b23ed4bbb593195a721373ed6693687a9b444c97065ce8ac653ba464375711
(For s390x architecture) The image digest is sha256:a32d509d960eb3e889a22c4673729f95170489789c85308794287e6e9248fb79
(For ppc64le architecture) The image digest is sha256:bca0e4a4ed28b799e860e302c4f6bb7e11598f7c136c56938db0bf9593fb76f8
(For aarch64 architecture) The image digest is sha256:e07e4075c07fca21a1aed9d7f9c165696b1d0fa4940a219a000894e5683d846c
All OpenShift Container Platform 4.13 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1770297 - console odo download link needs to go to an official location or have caveats [openshift-4.4]
1853264 - Metrics produce high unbound cardinality
1877261 - [RFE] Mounted volume size issue when restore a larger size pvc than snapshot
1904573 - OpenShift: containers modify /etc/passwd group writable
1943194 - when using gpus, more nodes than needed are created by the node autoscaler
1948666 - After entering valid git repo url on Import from git page, throwing warning message instead Validated
1971033 - CVE-2021-20329 mongo-go-driver: specific cstrings input may not be properly validated
2005232 - Pods list page should only show Create Pod button to user has sufficient permission
2016006 - Repositories list does not show the running pipelinerun as last pipelinerun
2027000 - The user is ignored when we create a new file using a MachineConfig
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047299 - nodeport not reachable port connection timeout
2050230 - Implement LIST call chunking in openshift-sdn
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2065166 - GCP - Less privileged service accounts are created with Service Account User role
2066388 - Wrong Error generates when https is missing in the value of regionEndpoint in configs.imageregistry.operator.openshift.io/cluster
2066664 - [cluster-storage-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2070744 - openshift-install destroy in us-gov-west-1 results in infinite loop - AWS govcloud
2075548 - Support AllocateLoadBalancerNodePorts=False with ETP=local, LGW mode
2076619 - Could not create deployment with an unknown git repo and builder image build strategy
2078222 - egressIPs behave inconsistently towards in-cluster traffic (hosts and services backed by host-networked pods)
2079981 - PVs not deleting on azure (or very slow to delete) since CSI migration to azuredisk
2081858 - OVN-Kubernetes: SyncServices for nodePortWatcherIptables should propagate failures back to caller
2083087 - "Delete dependent objects of this resource" might cause confusions
2084452 - PodDisruptionBudgets help message should be semantic
2087043 - Cluster API components should use K8s 1.24 dependencies
2087553 - No rhcos-4.11/x86_64 images in the 2 new regions on alibabacloud, "ap-northeast-2 (South Korea (Seoul))" and "ap-southeast-7 (Thailand (Bangkok))"
2089093 - CVO hotloops on OperatorGroup due to the diff of "upgradeStrategy": string("Default")
2089138 - CVO hotloops on ValidatingWebhookConfiguration /performance-addon-operator
2090680 - upgrade for a disconnected cluster get hang on retrieving and verifying payload
2092567 - Network policy is not being applied as expected
2092811 - Datastore name is too long
2093339 - [rebase v1.24] Only known images used by tests
2095719 - serviceaccounts are not updated after upgrade from 4.10 to 4.11
2100181 - WebScale: configure-ovs.sh fails because it picks the wrong default interface
2100429 - [apiserver-auth] default SCC restricted allow volumes don't have "ephemeral" caused deployment with Generic Ephemeral Volumes stuck at Pending
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2104978 - MCD degrades are not overwrite-able by subsequent errors
2110565 - PDB: Remove add/edit/remove actions in Pod resource action menu
2110570 - Topology sidebar: Edit pod count shows not the latest replicas value when edit the count again
2110982 - On GCP, need to check load balancer health check IPs required for restricted installation
2113973 - operator scc is nor fixed when we define a custom scc with readOnlyRootFilesystem: true
2114515 - Getting critical NodeFilesystemAlmostOutOfSpace alert for 4K tmpfs
2115265 - Search page: LazyActionMenus are shown below Add/Remove from navigation button
2116686 - [capi] Cluster kind should be valid
2117374 - Improve Pod Admission failure for restricted-v2 denials that pass with restricted
2135339 - CVE-2022-41316 vault: insufficient certificate revocation list checking
2149436 - CVE-2022-46146 exporter-toolkit: authentication bypass via cache poisoning
2154196 - CVE-2022-23526 helm: Denial of service through schema file
2154202 - CVE-2022-23525 helm: Denial of service through repository index file
2156727 - CVE-2021-4235 go-yaml: Denial of Service in go-yaml
2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be
2161274 - CVE-2022-41717 golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests
2162182 - CVE-2022-41721 x/net/http2/h2c: request smuggling
2168458 - CVE-2023-25165 helm: getHostByName Function Information Disclosure
2174485 - CVE-2023-25173 containerd: Supplementary groups are not set up properly
2175721 - CVE-2023-27561 runc: volume mount race condition (regression of CVE-2019-19921)
2178358 - CVE-2022-41723 net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding
2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption
2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics
2182883 - CVE-2023-28642 runc: AppArmor can be bypassed when /proc inside the container is symlinked with a specific mount configuration
2182884 - CVE-2023-25809 runc: Rootless runc makes /sys/fs/cgroup writable
2182972 - CVE-2023-25000 hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations
2182981 - CVE-2023-0665 hashicorp/vault: Vault’s PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata
2184663 - CVE-2023-0620 vault: Vault’s Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File
2190116 - CVE-2023-30841 baremetal-operator: plain-text username and hashed password readable by anyone having a cluster-wide read-access
- JIRA issues fixed (https://issues.jboss.org/):
OCPBUGS-10036 - Enable aesgcm encryption provider by default in openshift/api
OCPBUGS-10038 - Enable aesgcm encryption provider by default in openshift/cluster-config-operator
OCPBUGS-10042 - Enable aesgcm encryption provider by default in openshift/cluster-kube-apiserver-operator
OCPBUGS-10043 - Enable aesgcm encryption provider by default in openshift/cluster-openshift-apiserver-operator
OCPBUGS-10044 - Enable aesgcm encryption provider by default in openshift/cluster-authentication-operator
OCPBUGS-10047 - oc-mirror print log: unable to parse reference oci://mno/redhat-operator-index:v4.12
OCPBUGS-10057 - With WPC card configured as GM or BC, phc2sys clock lock state is shown as FREERUN in ptp metrics while it should be LOCKED
OCPBUGS-10213 - aws: mismatch between RHCOS and AWS SDK regions
OCPBUGS-10220 - Newly provisioned machines unable to join cluster
OCPBUGS-10221 - Risk cache warming takes too long on channel changes
OCPBUGS-10237 - Limit the nested repository path while mirroring the images using oc-mirror for those who cant have nested paths in their container registry
OCPBUGS-10239 - [release-4.13] Fix of ServiceAccounts gathering
OCPBUGS-10249 - PollConsoleUpdates won't fire toast if one or more manifests errors when plugins change
OCPBUGS-10267 - NetworkManager TUI quits regardless of a detected unsupported configuration
OCPBUGS-10271 - [4.13] Netflink overflow alert
OCPBUGS-10278 - Graph-data is not mounted on graph-builder correctly while install using graph-data image built by oc-mirror
OCPBUGS-10281 - Openshift Ansible OVS version out of sync with RHCOS
OCPBUGS-10291 - Broken link for Ansible tagging
OCPBUGS-10298 - TenantID is ignored in some cases
OCPBUGS-10320 - Catalogs should not be included in the ImageContentSourcePolicy.yaml
OCPBUGS-10321 - command cannot be worked after chroot /host for oc debug pod
OCPBUGS-1033 - Multiple extra manifests in the same file are not applied correctly
OCPBUGS-10334 - Nutanix cloud-controller-manager pod not have permission to get/list ConfigMap
OCPBUGS-10353 - kube-apiserver not receiving or processing shutdown signal after coreos 9.2 bump
OCPBUGS-10367 - Pausing pools in OCP 4.13 will cause critical alerts to fire
OCPBUGS-10377 - [gcp] IPI installation with Shielded VMs enabled failed on restarting the master machines
OCPBUGS-10404 - Workload annotation missing from deployments
OCPBUGS-10421 - RHCOS 4.13 live iso x84_64 contains restrictive policy.json
OCPBUGS-10426 - node-topology is not exported due to kubelet.sock: connect: permission denied
OCPBUGS-10427 - 4.1 born cluster fails to scale-up due to podman run missing --authfile flag
OCPBUGS-10432 - CSI Inline Volume admission plugin does not log object name correctly
OCPBUGS-10440 - OVN IPSec - does not create IPSec tunnels
OCPBUGS-10474 - OpenShift pipeline TaskRun(s) column Duration is not present as column in UI
OCPBUGS-10476 - Disable netlink mode of netclass collector in Node Exporter.
OCPBUGS-1048 - if tag categories don't exist, the installation will fail to bootstrap
OCPBUGS-10483 - [4.13 arm64 image][AWS EFS] Driver fails to get installed/exec format error
OCPBUGS-10558 - MAPO failing to retrieve flavour information after rotating credentials
OCPBUGS-10585 - [4.13] Request to update RHCOS installer bootimage metadata
OCPBUGS-10586 - Console shows x509 error when requesting token from oauth endpoint
OCPBUGS-10597 - The agent-tui shows again during the installation
OCPBUGS-1061 - administrator console, monitoring-alertmanager-edit user list or create silence, "Observe - Alerting - Silences" page is pending
OCPBUGS-10645 - 4.13: Operands running management side missing affinity, tolerations, node selector and priority rules than the operator
OCPBUGS-10656 - create image command erroneously logs that Base ISO was obtained from release
OCPBUGS-10657 - When releaseImage is a digest the create image command generates spurious warning
OCPBUGS-10658 - Wrong PrimarySubnet in OpenstackProviderSpec when using Failure Domains
OCPBUGS-10661 - machine API operator failing with No Major.Minor.Patch elements found
OCPBUGS-10678 - Developer catalog shows ImageStreams as samples which has no sampleRepo
OCPBUGS-10679 - Show type of sample on the samples view
OCPBUGS-10689 - [IPI on BareMetal]: Workers failing inspection when installing with proxy
OCPBUGS-10697 - [release-4.13] User is allowed to create IP Address pool with duplicate entries for namespace and matchExpression for serviceSelector and namespaceSelector
OCPBUGS-10698 - [release-4.13] Already assigned IP address is removed from a service on editing the ip address pool.
OCPBUGS-10710 - Metal virtual media job permafails during early bootstrap
OCPBUGS-10716 - Image Registry default to Removed on IBM cloud after 4.13.0-ec.3
OCPBUGS-10739 - [4.13] Bootimage bump tracker
OCPBUGS-10744 - [4.13] EgressFirewall status disappeared
OCPBUGS-10746 - Downstream Operator-SDK v1.22.2 to OCP 4.13
OCPBUGS-10771 - upgrade test failure with "Cluster operator control-plane-machine-set is not available"
OCPBUGS-10773 - TestNewAppRun unit test failing
OCPBUGS-10792 - Hypershift namespace servicemonitor has wrong API group
OCPBUGS-10793 - Ignore device list missing in Node Exporter
OCPBUGS-10796 - [4.13] Egress firewall is not retried on error
OCPBUGS-10799 - Network policy perf improvements
OCPBUGS-10801 - [4.13] Upgrade to 4.10 stalled on timeout completing syncEgressFirewall
OCPBUGS-10811 - Missing vCenter build number in telemetry
OCPBUGS-10813 - SCOS bootstrap should skip pivot when root is not writable
OCPBUGS-10826 - RHEL 9.2 doesn't contain the kernel-abi-whitelists package.
OCPBUGS-10832 - Edit Deployment (and DC) form doesn't enable Save button when changing strategy type
OCPBUGS-10833 - update the default pipelineRun template name
OCPBUGS-10834 - [OVNK] [IC] Having only one leader election in the master process
OCPBUGS-10873 - OVN to OVN-H migration seems broken
OCPBUGS-10888 - oauth-server fails to invalidate cache, causing non existing groups being referenced
OCPBUGS-10890 - Hypershift replace upgrade: node in NotReady after upgrading from a 4.14 image to another 4.14 image
OCPBUGS-10891 - Cluster Autoscaler balancing similar nodes test fails randomly
OCPBUGS-10892 - Passwords printed in log messages
OCPBUGS-10893 - Remove unsupported warning in oc-mirror when using the --skip-pruning flag
OCPBUGS-10902 - [IBMCloud] destroyed the private cluster, fail to cleanup the dns records
OCPBUGS-10903 - [IBMCloud] fail to ssh to master/bootstrap/worker nodes from the bastion inside a customer vpc.
OCPBUGS-10907 - move to rhel9 in DTK for 4.13
OCPBUGS-10914 - Node healthz server: return unhealthy when pod is to be deleted
OCPBUGS-10919 - Update Samples Operator to use latest jenkins 4.12 release
OCPBUGS-10923 - Cluster bootstrap waits for only one master to join before finishing
OCPBUGS-10929 - Kube 1.26 for ovn-k
OCPBUGS-10946 - For IPv6-primary dual-stack cluster, kubelet.service renders only single node-ip
OCPBUGS-10951 - When imagesetconfigure without OCI FBC format config, but command with use-oci-feature flag, the oc-mirror command should check the imagesetconfigure firstly and print error immediately
OCPBUGS-10953 - ovnkube-node does not close up correctly
OCPBUGS-10955 - [release-4.13] NMstate complains about ping not working when adding multiple routing tables with different gateways
OCPBUGS-10960 - [4.13] Vertical Scaling: do not trigger inadvertent machine deletion during bootstrap
OCPBUGS-10965 - The network-tools image stream is missing in the cluster samples
OCPBUGS-10982 - [4.13] nodeSelector in EgressFirewall doesn't work in dualstack cluster
OCPBUGS-10989 - Agent create sub-command is returning fatal error
OCPBUGS-10990 - EgressIP doesn't work in GCP XPN cluster
OCPBUGS-11004 - Bootstrap kubelet client cert should include system:serviceaccounts group
OCPBUGS-11010 - [vsphere] zone cluster installation fails if vSphere Cluster is embedded in Folder
OCPBUGS-11022 - [4.13][scale] all egressfirewalls will be updated on every node update
OCPBUGS-11023 - [4.13][scale] Ingress network policy creates more flows than before
OCPBUGS-11031 - SNO OCP upgrade from 4.12 to 4.13 failed due to node-tuning operator is not available - tuned pod stuck at Terminating
OCPBUGS-11032 - Update the validation interval for the cluster transfer to 12 hours
OCPBUGS-11040 - --container-runtime is being removed in k8s 1.27
OCPBUGS-11054 - GCP: add europe-west12 region to the survey as supported region
OCPBUGS-11055 - APIServer service isn't selected correctly for PublicAndPrivate cluster when external-dns is not configured
OCPBUGS-11058 - [4.13] Conmon leaks symbolic links in /var/run/crio when pods are deleted
OCPBUGS-11068 - nodeip-configuration not enabled for VSphere UPI
OCPBUGS-11107 - Alerts display incorrect source when adding external alert sources
OCPBUGS-11117 - The provided gcc RPM inside DTK does not match the gcc used to build the kernel
OCPBUGS-11120 - DTK docs should mention the ubi9 base image instead of ubi8
OCPBUGS-11213 - BMH moves to deleting before all finalizers are processed
OCPBUGS-11218 - "pipelines-as-code-pipelinerun-go" configMap is not been used for the Go repository
OCPBUGS-11222 - kube-controller-manager cluster operator is degraded due connection refused while querying rules
OCPBUGS-11227 - Relax CSR check due to k8s 1.27 changes
OCPBUGS-11232 - All projects options shows as undefined after selection in Dev perspective Pipelines page
OCPBUGS-11248 - Secret name variable get renders in Create Image pull secret alert
OCPBUGS-1125 - Fix disaster recovery test [sig-etcd][Feature:DisasterRecovery][Disruptive] [Feature:EtcdRecovery] Cluster should restore itself after quorum loss [Serial]
OCPBUGS-11257 - egressip cannot be assigned on hypershift hosted cluster node
OCPBUGS-11261 - [AWS][4.13] installer get stuck if BYO private hosted zone is configured
OCPBUGS-11263 - PTP KPI version 4.13 RC2 WPC - offset jumps to huge numbers
OCPBUGS-11307 - Egress firewall node selector test missing
OCPBUGS-11333 - startupProbe for UWM prometheus is still 15m
OCPBUGS-11339 - ose-ansible-operator base image version is still 4.12 in the operators that generated by operator-sdk 4.13
OCPBUGS-11340 - ose-helm-operator base image version is still 4.12 in the operators that generated by operator-sdk 4.13
OCPBUGS-11341 - openshift-manila-csi-driver is missing the workload.openshift.io/allowed label
OCPBUGS-11354 - CPMS: node readiness transitions not always trigger reconcile
OCPBUGS-11384 - Switching from enabling realTime to disabling Realtime Workloadhint causes stalld to be enabled
OCPBUGS-11390 - Service Binding Operator installation fails: "A subscription for this operator already exists in namespace ..."
OCPBUGS-11424 - [release-4.13] new whereabouts reconciler relies on HOSTNAME which != spec.nodeName
OCPBUGS-11427 - [release-4.13] whereabouts reads wrong annotation "k8s.v1.cni.cncf.io/networks-status", should be "k8s.v1.cni.cncf.io/network-status"
OCPBUGS-11456 - PTP - When GM and downstream slaves are configured on same server, ptp metrics show slaves as FREERUN
OCPBUGS-11458 - Ingress Takes 40s on Average Downtime During GCP OVN Upgrades
OCPBUGS-11460 - CPMS doesn't always generate configurations for AWS
OCPBUGS-11468 - Community operator cannot be mirrored due to malformed image address
OCPBUGS-11469 - [release4.13] "exclude bundles with olm.deprecated property when rendering" not backport
OCPBUGS-11473 - NS autolabeler requires RoleBinding subject namespace to be set when using ServiceAccount
OCPBUGS-11485 - [4.13] NVMe disk by-id rename breaks LSO/ODF
OCPBUGS-11503 - Update 4.13 cluster-network-operator image in Dockerfile to be consistent with ART
OCPBUGS-11506 - CPMS e2e periodics tests timeout failures
OCPBUGS-11507 - Potential 4.12 to 4.13 upgrade failure due to NIC rename
OCPBUGS-11510 - Setting cpu-quota.crio.io to disable with crun causes container creation to fail
OCPBUGS-11511 - [4.13] static container pod cannot be running due to CNI request failed with status 400
OCPBUGS-11529 - [Azure] fail to collect the vm serial log with 'gather bootstrap'
OCPBUGS-11536 - Cluster monitoring operator runs node-exporter with btrfs collector
OCPBUGS-11545 - multus-admission-controller should not run as root under Hypershift-managed CNO
OCPBUGS-11558 - multus-admission-controller should not run as root under Hypershift-managed CNO
OCPBUGS-11589 - Ensure systemd is compatible with rhel8 journalctl
OCPBUGS-11598 - openshift-azure-routes triggered continously on rhel9
OCPBUGS-11606 - User configured In-cluster proxy configuration squashed in hypershift
OCPBUGS-11643 - Updating kube-rbac-proxy images to be consistent with ART
OCPBUGS-11657 - [4.13] Static IPv6 LACP bonding is randomly failing in RHCOS 413.92
OCPBUGS-11659 - Error extracting libnmstate.so.1.3.3 when create image
OCPBUGS-11661 - AWS s3 policy changes block all OCP installs on AWS
OCPBUGS-11669 - Bump to kubernetes 1.26.3
OCPBUGS-11683 - [4.13] Add Controller health to CEO liveness probe
OCPBUGS-11694 - [4.13] Update legacy toolbox to use registry.redhat.io/rhel9/support-tools
OCPBUGS-11706 - ccoctl cannot create STS documents in 4.10-4.13 due to s3 policy changes
OCPBUGS-11750 - TuningCNI cnf-test failure: sysctl allowlist update
OCPBUGS-11765 - [4.13] Keep current OpenSSH default config in RHCOS 9
OCPBUGS-11776 - [4.13] VSphereStorageDriver does not document the platform default
OCPBUGS-11778 - Upgrade SNO: no resolv.conf caused by failure in forcedns dispatcher script
OCPBUGS-11787 - Update 4.14 ose-vmware-vsphere-csi-driver image to be consistent with ART
OCPBUGS-11789 - [4.13] Bootimage bump tracker
OCPBUGS-11799 - [4.13] Bootimage bump tracker
OCPBUGS-11823 - [Reliability]kube-apiserver's memory usage keep increasing to max 3GB in 7 days
OCPBUGS-11848 - PtpOperatorsConfig not applying correctly
OCPBUGS-11866 - Pipeline is not removed when Deployment/DC/Knative Service or Application is deleted
OCPBUGS-11870 - [4.13] Nodes in Ironic are created without namespaces initially
OCPBUGS-11876 - oc-mirror generated file-based catalogs crashloop
OCPBUGS-11908 - Got the file exists error when different digest direct to the same tag
OCPBUGS-11917 - the warn message won't disappear in co/node-tuning when scale down machineset
OCPBUGS-11919 - Console metrics could have a high cardinality (4.13)
OCPBUGS-11950 - fail to create vSphere IPI cluster as apiVIP and ingressVIP are not in machine networks
OCPBUGS-11955 - NTP config not applied
OCPBUGS-11968 - Instance shouldn't be moved back from f to a
OCPBUGS-11985 - [4.13] Ironic inspector service should be proxied
OCPBUGS-12172 - Users don't know what type of resource is being created by Import from Git or Deploy Image flows
OCPBUGS-12179 - agent-tui is failing to start when using libnmstate.2
OCPBUGS-12186 - Pipeline doesn't render correctly when displayed but looks fine in edit mode
OCPBUGS-12198 - create hosted cluster failed with aws s3 access issue
OCPBUGS-12212 - cluster failed to convert from dualstack to ipv4 single stack
OCPBUGS-12225 - Add new OCP 4.13 storage admission plugin
OCPBUGS-12257 - Catalogs rebuilt by oc-mirror are in crashloop : cache is invalid
OCPBUGS-12259 - oc-mirror fails to complete with heads only complaining about devworkspace-operator
OCPBUGS-12271 - Hypershift conformance test fails new cpu partitioning tests
OCPBUGS-12272 - Importing a kn Service shows a non-working Open URL decorator also when the Add Route checkbox was unselected
OCPBUGS-12273 - When Creating Sample Devfile from the Samples Page, Topology Icon is not set
OCPBUGS-12450 - [4.13] Fix Flake TestAttemptToScaleDown/scale_down_only_by_one_machine_at_a_time
OCPBUGS-12465 - --use-oci-feature leads to confusion and needs to be better named
OCPBUGS-12478 - CSI driver + operator containers are not pinned to mgmt cores
OCPBUGS-1264 - e2e-vsphere-zones failing due to unable to parse cloud-config
OCPBUGS-12698 - redfish-virtualmedia mount not working
OCPBUGS-12703 - redfish-virtualmedia mount not working
OCPBUGS-12708 - [4.13] Changing a PreprovisioningImage ImageURL and/or ExtraKernelParams should reboot the host
OCPBUGS-1272 - "opm alpha render-veneer basic" doesn't support pipe stdin
OCPBUGS-12737 - Multus admission controller must have "hypershift.openshift.io/release-image" annotation when CNO is managed by Hypershift
OCPBUGS-12786 - OLM CatalogSources in guest cluster cannot pull images if pre-GA
OCPBUGS-12804 - Dual stack VIPs incompatible with EnableUnicast setting
OCPBUGS-12854 - cluster-reader role cannot access "k8s.ovn.org" API Group resources
OCPBUGS-12862 - IPv6 ingress VIP not configured in keepalived on vSphere Dual-stack
OCPBUGS-12865 - Kubernetes-NMState CI is perma-failing
OCPBUGS-12933 - Node Tuning Operator crashloops when in Hypershift mode
OCPBUGS-12994 - TCP DNS Local Preference is not working for Openshift SDN
OCPBUGS-12999 - Backport owners through 4.13, 4.12
OCPBUGS-13029 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13
OCPBUGS-13057 - ppc64le releases don't install because ovs fails to start (invalid permissions)
OCPBUGS-13069 - [whereabouts-cni] CNO must use reconciliation controller in order to support dual stack in 4.12 [4.13 dependency]
OCPBUGS-13071 - CI fails on TestClientTLS
OCPBUGS-13072 - Capture tests don't work in OVNK
OCPBUGS-13076 - Load balancers/ Ingress controller removal race condition
OCPBUGS-13157 - CI fails on TestRouterCompressionOperation
OCPBUGS-13254 - Nutanix cloud provider should use Kubernetes 1.26 dependencies
OCPBUGS-1327 - [IBMCloud] Worker machines unreachable during initial bring up
OCPBUGS-1352 - OVN silently failing in case of a stuck pod
OCPBUGS-1427 - Ignore non-ready endpoints when processing endpointslices
OCPBUGS-1428 - service account token secret reference
OCPBUGS-1435 - [Ingress Node Firewall Operator] [Web Console] Allow user to override namespace where the operator is installed, currently user can install it only in openshift-operators ns
OCPBUGS-1443 - Unable to get ClusterVersion error while upgrading 4.11 to 4.12
OCPBUGS-1453 - TargetDown alert expression is NOT correctly joining kube-state-metrics metric
OCPBUGS-1458 - cvo pod crashloop during bootstrap: featuregates: connection refused
OCPBUGS-1486 - Avoid re-metric'ing the pods that are already setup when ovnkube-master disrupts/reinitializes/restarts/goes through leader election
OCPBUGS-1557 - Default to floating automaticRestart for new GCP instances
OCPBUGS-1560 - [vsphere] installation fails when only configure single zone in install-config
OCPBUGS-1565 - Possible split brain with keepalived unicast
OCPBUGS-1566 - Automation Offline CPUs Test cases
OCPBUGS-1577 - Incorrect network configuration in worker node with two interfaces
OCPBUGS-1604 - Common resources out-of-date when using multicluster switcher
OCPBUGS-1606 - Multi-cluster: We should not filter OLM catalog by console pod architecture and OS on managed clusters
OCPBUGS-1612 - [vsphere] installation errors out when missing topology in a failure domain
OCPBUGS-1617 - Remove unused node.kubernetes.io/not-reachable toleration
OCPBUGS-1627 - [vsphere] installation fails when setting user-defined folder in failure domain
OCPBUGS-1646 - [osp][octavia lb] LBs type svcs not updated until all the LBs are created
OCPBUGS-166 - 4.11 SNOs fail to complete install because of "failed to get pod annotation: timed out waiting for annotations: context deadline exceeded"
OCPBUGS-1665 - Scorecard failed because of the request of PodSecurity
OCPBUGS-1671 - Creating a statefulset with the example image from the UI on ARM64 leads to a Pod in crashloopbackoff due to the only-amd64 image provided
OCPBUGS-1704 - [gcp] when the optional Service Usage API is disabled, IPI installation cannot succeed
OCPBUGS-1725 - Affinity rule created in router deployment for single-replica infrastructure and "NodePortService" endpoint publishing strategy
OCPBUGS-1741 - Can't load additional Alertmanager templates with latest 4.12 OpenShift
OCPBUGS-1748 - PipelineRun templates must be fetched from OpenShift namespace
OCPBUGS-1761 - osImages that cannot be pulled do not set the node as Degraded properly
OCPBUGS-1769 - gracefully fail when iam:GetRole is denied
OCPBUGS-1778 - Can't install clusters with schedulable masters
OCPBUGS-1791 - Wait-for install-complete did not exit upon completion.
OCPBUGS-1805 - [vsphere-csi-driver-operator] CSI cloud.conf doesn't list multiple datacenters when specified
OCPBUGS-1807 - Ingress Operator startup bad log message formatting
OCPBUGS-1844 - Ironic dnsmasq doesn't include existing DNS settings during iPXE boot
OCPBUGS-1852 - [RHOCP 4.10] Subscription tab for operator doesn't land on correct URL
OCPBUGS-186 - PipelineRun task status overlaps status text
OCPBUGS-1998 - Cluster monitoring fails to achieve new level during upgrade w/ unavailable node
OCPBUGS-2015 - TestCertRotationTimeUpgradeable failing consistently in kube-apiserver-operator
OCPBUGS-2083 - OCP 4.10.33 uses a weak 3DES cipher in the VMWare CSI Operator for communication and provides no method to disable it
OCPBUGS-2088 - User can set rendezvous host to be a worker
OCPBUGS-2141 - doc link in PrometheusDataPersistenceNotConfigured message is 4.8
OCPBUGS-2145 - 'maxUnavailable' and 'minAvailable' on PDB creation page - i18n misses
OCPBUGS-2209 - Hard eviction thresholds is different with k8s default when PAO is enabled
OCPBUGS-2248 - [alibabacloud] IPI installation failed with master nodes being NotReady and CCM error "alicloud: unable to split instanceid and region from providerID"
OCPBUGS-2260 - KubePodNotReady - Increase Tolerance During Master Node Restarts
OCPBUGS-2306 - On Make Serverless page, values of the minpod, maxpod and concurrency input fields can only be changed by clicking the "+" or "-" buttons; they can't be changed by typing.
OCPBUGS-2319 - metal-ipi upgrade success rate dropped 30+% in last week
OCPBUGS-2384 - [2035720] [IPI on Alibabacloud] deploying a private cluster by 'publish: Internal' failed due to 'dns_public_record'
OCPBUGS-2440 - unknown field logs in prometheus-operator
OCPBUGS-2471 - BareMetalHost is available without cleaning if the cleaning attempt fails
OCPBUGS-2479 - Right border radius is 0 for the pipeline visualization wrapper in dark mode
OCPBUGS-2500 - Developer Topology always blanks with large contents when first rendering
OCPBUGS-2513 - Disconnected cluster installation fails with pull secret must contain auth for "registry.ci.openshift.org"
OCPBUGS-2525 - [CI Watcher] Ongoing timeout failures associated with multiple CRD-extensions tests
OCPBUGS-2532 - Upgrades from 4.11.9 to latest 4.12.x Nightly builds do not succeed
OCPBUGS-2551 - "Error loading" when normal user check operands on All namespaces
OCPBUGS-2569 - ovn-k network policy races
OCPBUGS-2579 - Helm Charts and Samples are not disabled in topology actions if actions are disabled in customization
OCPBUGS-266 - Project Access tab cannot differentiate between users and groups
OCPBUGS-2666 - create a project link not backed by RBAC check
OCPBUGS-272 - Getting duplicate word "find" when kube-apiserver degraded=true if webhook matches a virtual resource
OCPBUGS-2727 - ClusterVersionRecommendedUpdate condition blocks explicitly allowed upgrade which is not in the available updates
OCPBUGS-2729 - should ignore enP.* NICs from node-exporter on Azure cluster
OCPBUGS-2735 - Operand List Page Layout Incorrect on small screen size.
OCPBUGS-2738 - CVE-2022-26945 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323 ose-baremetal-installer-container: various flaws [openshift-4.13.z]
OCPBUGS-2824 - The dropdown list component will be covered by deployment details page on Topology page
OCPBUGS-2827 - OVNK: NAT issue for packets exceeding check_pkt_larger() for NodePort services that route to hostNetworked pods
OCPBUGS-2841 - Need validation rule for supported arch
OCPBUGS-2845 - Unable to use application credentials for Cinder CSI after OpenStack credentials update
OCPBUGS-2847 - GCP XPN should only be available with Tech Preview
OCPBUGS-2851 - [OCI feature] registries.conf support in oc mirror
OCPBUGS-2852 - etcd failure: failed to make etcd client for endpoints [https://[2620:52:0:1eb:367x:5axx:xxx:xxx]:2379]: context deadline exceeded
OCPBUGS-2868 - Container networking pods cannot access host network pods on another node in IPv6 single-stack cluster
OCPBUGS-2873 - Prometheus doesn't reload TLS certificate and key files on disk
OCPBUGS-2886 - The LoadBalaner section shouldn't be set when using Kuryr on cloud-provider
OCPBUGS-2891 - AWS Deprovision Fails with unrecognized elastic load balancing resource type listener
OCPBUGS-2895 - [RFE] 4.11 Azure DiskEncryptionSet static validation does not support upper-case letters
OCPBUGS-2904 - If all the actions are disabled in add page, Details on/off toggle switch to be disabled
OCPBUGS-2907 - provisioning of baremetal nodes fails when using multipath device as rootDeviceHints
OCPBUGS-2921 - br-ex interface not configured makes ovnkube-node Pod to crashloop
OCPBUGS-2922 - 'Status' column sorting doesn't work as expected
OCPBUGS-2926 - Unable to gather OpenStack console logs since kernel cmd line has no console args
OCPBUGS-2934 - Ingress node firewall pod's events container on the node causing pod in CrashLoopBackOff state when sctp module is loaded on node
OCPBUGS-2941 - CIRO unable to detect swift when content-type is omitted in 204-responses
OCPBUGS-2946 - [AWS] curl network Loadbalancer always get "Connection time out"
OCPBUGS-2948 - Whereabouts CNI timesout while iterating exclude range
OCPBUGS-2988 - apiserver pods cannot reach etcd on single node IPv6 cluster: transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10"
OCPBUGS-2991 - CI jobs are failing with: admission webhook "validation.csi.vsphere.vmware.com" denied the request
OCPBUGS-2992 - metal3 pod crashloops on OKD in BareMetal IPI or assisted-installer bare metal installations
OCPBUGS-2994 - Keepalived monitor stuck for long period of time on kube-api call while installing
OCPBUGS-2996 - [4.13] Bootimage bump tracker
OCPBUGS-3018 - panic in WaitForBootstrapComplete
OCPBUGS-3021 - GCP: missing me-west1 region
OCPBUGS-3024 - Service list shows undefined:80 when type is ExternalName or LoadBalancer
OCPBUGS-3027 - Metrics are not available when running console in development mode
OCPBUGS-3029 - BareMetalHost CR fails to delete on cluster cleanup
OCPBUGS-3033 - Clicking the logo in the masthead goes to /dashboards, even if metrics are disabled
OCPBUGS-3041 - Guard Pod Hostnames Too Long and Truncated Down Into Collisions With Other Masters
OCPBUGS-3069 - Should show information on page if the upgrade to a target version doesn't take effect.
OCPBUGS-3072 - Operator-sdk run bundle with old sqllite index image failed
OCPBUGS-3079 - RPS hook only sets the first queue, but there are now many
OCPBUGS-3085 - [IPI-BareMetal]: Dual stack deployment failed on BootStrap stage
OCPBUGS-3093 - The control plane should tag AWS security groups at creation
OCPBUGS-3096 - The terraform binaries shipped by the installer are not statically linked
OCPBUGS-3109 - Change text colour for ConsoleNotification that notifies user that the cluster is being
OCPBUGS-3114 - CNO reporting incorrect status
OCPBUGS-3123 - Operator attempts to render both GA and Tech Preview API Extensions
OCPBUGS-3127 - nodeip-configuration retries forever on network failure, blocking ovs-configuration, spamming syslog
OCPBUGS-3168 - Add Capacity button does not exist after upgrade OCP version [OCP4.11->OCP4.12]
OCPBUGS-3172 - Console shouldn't try to install dynamic plugins if permissions aren't available
OCPBUGS-3180 - Regression in ptp-operator conformance tests
OCPBUGS-3186 - [ibmcloud] unclear error msg when zones is not match with the Subnets in BYON install
OCPBUGS-3192 - [4.8][OVN] RHEL 7.9 DHCP worker ovs-configuration fails
OCPBUGS-3195 - Service-ca controller exits immediately with an error on sigterm
OCPBUGS-3206 - [sdn2ovn] Migration failed in vsphere cluster
OCPBUGS-3207 - SCOS build fails due to pinned kernel
OCPBUGS-3214 - Installer does not always add router CA to kubeconfig
OCPBUGS-3228 - Broken secret created while starting a Pipeline
OCPBUGS-3235 - Topology gets stuck loading
OCPBUGS-3245 - ovn-kubernetes ovnkube-master containers crashlooping after 4.11.0-0.okd-2022-10-15-073651 update
OCPBUGS-3248 - CVE-2022-27191 ose-installer-container: golang: crash in a golang.org/x/crypto/ssh server [openshift-4]
OCPBUGS-3253 - No warning when using wait-for vs. agent wait-for commands
OCPBUGS-3272 - Unhealthy Readiness probe failed message failing CI when ovnkube DBs are still coming up
OCPBUGS-3275 - No-op: Unable to retrieve machine from node "xxx": expecting one machine for node xxx got: []
OCPBUGS-3277 - Install failure in create-cluster-and-infraenv.service
OCPBUGS-3278 - Shouldn't need to put host data in platform baremetal section in installconfig
OCPBUGS-3280 - Install ends in preparing-failed due to container-images-available validation
OCPBUGS-3283 - remove unnecessary RBAC in KCM
OCPBUGS-3292 - DaemonSet "/openshift-network-diagnostics/network-check-target" is not available
OCPBUGS-3314 - 'gitlab.secretReference' disappears when the buildconfig is edited in "Form View"
OCPBUGS-3316 - Branch name should sanitised to match actual github branch name in repository plr list
OCPBUGS-3320 - New master will be created if add duplicated failuredomains in controlplanemachineset
OCPBUGS-3331 - Update dependencies in CMO release 4.13
OCPBUGS-3334 - Console should be using v1 apiVersion for ConsolePlugin model
OCPBUGS-3337 - revert "force cert rotation every couple days for development" in 4.12
OCPBUGS-3338 - Environment cannot find Python
OCPBUGS-3358 - Revert BUILD-407
OCPBUGS-3372 - error message is too generic when creating a silence with end time before start
OCPBUGS-3373 - cluster-monitoring-view user can not list servicemonitors on "Observe -> Targets" page
OCPBUGS-3377 - CephCluster and StorageCluster resources use the same paths
OCPBUGS-3381 - Make ovnkube-trace work on hypershift deployments
OCPBUGS-3382 - Unable to configure cluster-wide proxy
OCPBUGS-3391 - seccomp profile unshare.json missing from nodes
OCPBUGS-3395 - Event Source is visible without even creating knative-eventing and knative-serving.
OCPBUGS-3404 - IngressController.spec.nodePlacement.nodeSelector.matchExpressions does not work
OCPBUGS-3414 - Missing 'ImageContentSourcePolicy' and 'CatalogSource' in the oci fbc feature implementation
OCPBUGS-3424 - Azure Disk CSI Driver Operator gets degraded without "CSISnapshot" capability
OCPBUGS-3426 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13
OCPBUGS-3427 - Skip broken [sig-devex][Feature:ImageEcosystem] tests
OCPBUGS-3438 - cloud-network-config-controller not using proxy settings of the management cluster
OCPBUGS-3440 - Authentication operator doesn't respond to console being enabled
OCPBUGS-3441 - Update cluster-authentication-operator not to go degraded without console
OCPBUGS-3444 - [4.13] Descheduler pod is OOM killed when using descheduler-operator profiles on big clusters
OCPBUGS-3456 - track rhcos-4.12 branch for fedora-coreos-config submodule
OCPBUGS-3458 - Surface ClusterVersion RetrievedUpdates condition messages
OCPBUGS-3465 - IBM operator needs deployment manifest fixes
OCPBUGS-3473 - Allow listing crio and kernel versions in machine-os components
OCPBUGS-3476 - Show Tag label and tag name if tag is detected in repository PipelineRun list and details page
OCPBUGS-3480 - Baremetal Provisioning fails on HP Gen9 systems due to eTag handling
OCPBUGS-3499 - Route CRD validation behavior must be the same as openshift-apiserver behavior
OCPBUGS-3501 - Route CRD host-assignment behavior must be the same as openshift-apiserver behavior
OCPBUGS-3502 - CRD-based and openshift-apiserver-based Route validation/defaulting must use the shared implementation
OCPBUGS-3508 - masters repeatedly losing connection to API and going NotReady
OCPBUGS-3524 - The storage account for the CoreOS image is publicly accessible when deploying fully private cluster on Azure
OCPBUGS-3526 - oc fails to extract layers that set xattr on Darwin
OCPBUGS-3539 - [OVN-provider]loadBalancer svc with monitors not working
OCPBUGS-3612 - [IPI] Baremetal ovs-configure.sh script fails to start secondary bridge br-ex1
OCPBUGS-3621 - EUS upgrade stuck on worker pool update: error running skopeo inspect --no-tags
OCPBUGS-3648 - Container security operator Image Manifest Vulnerabilities encounters runtime errors under some circumstances
OCPBUGS-3659 - Expose AzureDisk metrics port over HTTPS
OCPBUGS-3662 - don't enforce PSa in 4.12
OCPBUGS-3667 - PTP 4.12 Regression - CLOCK REALTIME status is locked when physical interface is down
OCPBUGS-3668 - 4.12.0-rc.0 fails to deploy on VMware IPI
OCPBUGS-3676 - After node's reboot some pods fail to start - deleteLogicalPort failed for pod cannot delete GR SNAT for pod
OCPBUGS-3693 - Router e2e: drop template.openshift.io apigroup dependency
OCPBUGS-3709 - Special characters in subject name breaks prefilling role binding form
OCPBUGS-3713 - [vsphere-problem-detector] fully qualified username must be used when checking permissions
OCPBUGS-3714 - 'oc adm upgrade ...' should expose ClusterVersion Failing=True
OCPBUGS-3739 - Pod stuck in containerCreating state when the node on which it is running is Terminated
OCPBUGS-3744 - Egress router POD creation is failing while using openshift-sdn network plugin
OCPBUGS-3755 - Create Alertmanager silence form does not explain the new "Negative matcher" option
OCPBUGS-3761 - Consistent e2e test failure: Events.Events: event view displays created pod
OCPBUGS-3765 - [RFE] Add kernel-rpm-macros to DTK image
OCPBUGS-3771 - contrib/multicluster-environment.sh needs to be updated to work with ACM cluster proxy
OCPBUGS-3776 - Manage columns tooltip remains displayed after dialog is closed
OCPBUGS-3777 - [Dual Stack] ovn-ipsec crashlooping due to cert signing issues
OCPBUGS-3797 - [4.13] Bump OVS control plane to get "ovsdb/transaction.c: Refactor assess_weak_refs."
OCPBUGS-3822 - Cluster-admin cannot know whether operator is fully deleted or not after normal user triggers "Delete CSV"
OCPBUGS-3827 - CCM not able to remove a LB in ERROR state
OCPBUGS-3877 - RouteTargetReference missing default for "weight" in Route CRD v1 schema
OCPBUGS-3880 - [Ingress Node Firewall] Change the logo used for ingress node firewall operator
OCPBUGS-3883 - Hosted ovnkubernetes pods are not being spread among workers evenly
OCPBUGS-3896 - Console nav toggle button reports expanded in both expanded and not expanded states
OCPBUGS-3904 - Delete/Add a failureDomain in CPMS to trigger update cannot work right on GCP
OCPBUGS-3909 - Node is degraded when a machine config deploys a unit with content and mask=true
OCPBUGS-3916 - expr for SDNPodNotReady is wrong due to there is not node label for kube_pod_status_ready
OCPBUGS-3919 - Azure: unable to configure EgressIP if an ASG is set
OCPBUGS-3921 - Openshift-install bootstrap operation cannot find a cloud defined in clouds.yaml in the current directory
OCPBUGS-3923 - [CI] cluster-monitoring-operator produces more watch requests than expected
OCPBUGS-3924 - Remove autoscaling/v2beta2 in 4.12 and later
OCPBUGS-3929 - Use flowcontrol/v1beta2 for apf manifests in 4.13
OCPBUGS-3931 - When all extensions are installed, "libkadm5" rpm package is duplicated in the rpm -q command
OCPBUGS-3933 - Fails to deprovision cluster when swift omits 'content-type'
OCPBUGS-3945 - Handle 0600 kubeconfig
OCPBUGS-3951 - Dynamic plugin extensions disappear from the UI when a codeRef fails to load
OCPBUGS-3960 - Use kernel-rt from ose repo
OCPBUGS-3965 - must-gather namespace should have "privileged" warn and audit pod security labels besides enforce
OCPBUGS-3973 - [SNO] csi-snapshot-controller CO is degraded when upgrade from 4.12 to 4.13 and reports permissions issue.
OCPBUGS-3974 - CIRO panics when suspended flag is nil
OCPBUGS-3975 - "Failed to open directory, disabling udev device properties" in node-exporter logs
OCPBUGS-3978 - AWS EBS CSI driver operator is degraded without "CSISnapshot" capability
OCPBUGS-3985 - Allow PSa enforcement in 4.13 by using featuresets
OCPBUGS-3987 - Some nmstate validations are skipped when NM config is in agent-config.yaml
OCPBUGS-3990 - HyperShift control plane operators have wrong priorityClass
OCPBUGS-3993 - egressIP annotation including two interfaces when multiple networks
OCPBUGS-4000 - fix operator naming convention
OCPBUGS-4008 - Console deployment does not roll out when managed cluster configmap is updated
OCPBUGS-4012 - Disabled Serverless add actions should not be displayed in topology menu
OCPBUGS-4026 - Endless rerender loop and a stuck browser on the add and topology page when SBO is installed
OCPBUGS-4047 - [CI-Watcher] e2e test flake: Create key/value secrets Validate a key/value secret
OCPBUGS-4049 - MCO reconcile fails if user replace the pull secret to empty one
OCPBUGS-4052 - [ALBO] OpenShift Load Balancer Operator does not properly support cluster wide proxy
OCPBUGS-4054 - cluster-ingress-operator's configurable-route controller's startup is noisy
OCPBUGS-4089 - Kube-State-metrics pod fails to start due to panic
OCPBUGS-4090 - OCP on OSP - Image registry is deployed with cinder instead of swift storage backend
OCPBUGS-4101 - Empty/missing node-sizing SYSTEM_RESERVED_ES parameter can result in kubelet not starting
OCPBUGS-4110 - Form footer buttons are misaligned in web terminal form
OCPBUGS-4119 - Random SYN drops in OVS bridges of OVN-Kubernetes
OCPBUGS-4166 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13
OCPBUGS-4168 - Prometheus continuously restarts due to slow WAL replay
OCPBUGS-4173 - vsphere-problem-detector should re-check passwords after change
OCPBUGS-4181 - Prometheus and Alertmanager incorrect ExternalURL configured
OCPBUGS-4184 - Use mTLS authentication for all monitoring components instead of bearer token
OCPBUGS-4203 - Unnecessary padding around alert atop debug pod terminal
OCPBUGS-4206 - getContainerStateValue contains incorrectly internationalized text
OCPBUGS-4207 - Remove debug level logging on openshift-config-operator
OCPBUGS-4219 - Add runbook link to PrometheusRuleFailures
OCPBUGS-4225 - [4.13] boot sequence override request fails with Base.1.8.PropertyNotWritable on Lenovo SE450
OCPBUGS-4232 - CNCC: Wrong log format for Azure locking
OCPBUGS-4245 - L2 does not work if a metallb is not able to listen to arp requests on a single interface
OCPBUGS-4252 - Node Terminal tab results in error
OCPBUGS-4253 - Add PodNetworkConnectivityCheck for must-gather
OCPBUGS-4266 - crio.service should use a safer restart policy to provide recoverability against concurrency issues
OCPBUGS-4279 - Custom Victory-Core components in monitoring ui code causing build issues
OCPBUGS-4280 - Return 0 when oc import-image fails
OCPBUGS-4282 - [IR-269]Can't pull sub-manifest image using imagestream of manifest list
OCPBUGS-4291 - [OVN]Sometimes after reboot egress node, egress IP cannot be applied anymore.
OCPBUGS-4293 - Specify resources.requests for operator pod
OCPBUGS-4298 - Specify resources.requests for operator pod
OCPBUGS-4302 - Specify resources.requests for operator pod
OCPBUGS-4305 - [4.13] Improve ironic logging configuration in metal3
OCPBUGS-4317 - [IBM][4.13][Snapshot] restore size in snapshot is not the same size of pvc request size
OCPBUGS-4328 - Update installer images to be consistent with ART
OCPBUGS-434 - After FIPS enabled in S390X, ingress controller in degraded state
OCPBUGS-4343 - Use flowcontrol/v1beta3 for apf manifests in 4.13
OCPBUGS-4347 - set TLS cipher suites in Kube RBAC sidecars
OCPBUGS-4350 - CNO in HyperShift reports upgrade complete in clusteroperator prematurely
OCPBUGS-4352 - [RHOCP] HPA shows different API versions in web console
OCPBUGS-4357 - Bump samples operator k8s dep to 1.25.2
OCPBUGS-4359 - cluster-dns-operator corrupts /etc/hosts when fs full
OCPBUGS-4367 - Debug log messages missing from output and Info messages malformed
OCPBUGS-4377 - Service name search ability while creating the Route from console
OCPBUGS-4401 - limit cluster-policy-controller RBAC permissions
OCPBUGS-4411 - ovnkube node pod crashed after converting to a dual-stack cluster network
OCPBUGS-4417 - ip-reconciler removes the overlappingrangeipreservations whether the pod is alive or not
OCPBUGS-4425 - Egress FW ACL rules are invalid in dualstack mode
OCPBUGS-4447 - [MetalLB Operator] The CSV needs an update to reflect the correct version of operator
OCPBUGS-446 - Cannot Add a project from DevConsole in airgap mode using git importing
OCPBUGS-4483 - apply retry logic to ovnk-node controllers
OCPBUGS-4490 - hypershift: csi-snapshot-controller uses wrong kubeconfig
OCPBUGS-4491 - hypershift: aws-ebs-csi-driver-operator uses wrong kubeconfig
OCPBUGS-4492 - [4.13] The property TransferProtocolType is required for VirtualMedia.InsertMedia
OCPBUGS-4502 - [4.13] [OVNK] Add support for service session affinity timeout
OCPBUGS-4516 - oc-mirror does not work as expected with relative paths for OCI format copy
OCPBUGS-4517 - Better to detail the --command-os of mac for the oc adm release extract command
OCPBUGS-4521 - all kubelet targets are down after a few hours
OCPBUGS-4524 - Hold lock when deleting completed pod during update event
OCPBUGS-4525 - Don't log in iterateRetryResources when there are no retry entries
OCPBUGS-4535 - There is no 4.13 gcp-filestore-csi-driver-operator version for test
OCPBUGS-4536 - Image registry panics while deploying OCP in eu-south-2 AWS region
OCPBUGS-4537 - Image registry panics while deploying OCP in eu-central-2 AWS region
OCPBUGS-4538 - Image registry panics while deploying OCP in ap-south-2 AWS region
OCPBUGS-4541 - Azure: remove deprecated ADAL
OCPBUGS-4546 - CVE-2021-38561 ose-installer-container: golang: out-of-bounds read in golang.org/x/text/language leads to DoS [openshift-4]
OCPBUGS-4549 - Azure: replace deprecated AD Graph API
OCPBUGS-4550 - [CI] console-operator produces more watch requests than expected
OCPBUGS-4571 - The operator recommended namespace is incorrect after change installation mode to "A specific namespace on the cluster"
OCPBUGS-4574 - Machine stuck in no phase when creating in a nonexistent zone and stuck in Deleting when deleting on GCP
OCPBUGS-463 - OVN-Kubernetes should not send IPs with leading zeros to OVN
OCPBUGS-4630 - Bump documentationBaseURL to 4.13
OCPBUGS-4635 - [OCP 4.13] ironic container images have old packages
OCPBUGS-4638 - Support RHOBS monitoring for HyperShift in CNO
OCPBUGS-4652 - Fixes for RHCOS 9 based on RHEL 9.0
OCPBUGS-4654 - Azure: UPI: Fix storage arm template to work with Galleries and MAO
OCPBUGS-4659 - Network Policy executes duplicate transactions for every pod update
OCPBUGS-4684 - In DeploymentConfig both the Form view and Yaml view are not in sync
OCPBUGS-4689 - SNO not able to bring up Provisioning resource in 4.11.17
OCPBUGS-4691 - Topology sidebar actions doesn't show the latest resource data
OCPBUGS-4692 - PTP operator: Use priority class node critical
OCPBUGS-4700 - read-only update UX: confusing "Update blocked" pop-up
OCPBUGS-4701 - read-only update UX: confusing "Control plane is hosted" banner
OCPBUGS-4703 - Router can migrate to use LivenessProbe.TerminationGracePeriodSeconds
OCPBUGS-4712 - ironic-proxy daemonset not deleted when provisioningNetwork is changed from Disabled to Managed/Unmanaged
OCPBUGS-4724 - [4.13] egressIP annotations not present on OpenShift on Openstack multiAZ installation
OCPBUGS-4725 - mapi_machinehealthcheck_short_circuit not properly reconciling causing MachineHealthCheckUnterminatedShortCircuit alert to fire
OCPBUGS-4746 - Removal of detection of host kubelet kubeconfig breaks IBM Cloud ROKS
OCPBUGS-4756 - OLM generates invalid component selector labels
OCPBUGS-4757 - Revert Catalog PSA decisions for 4.13 (OLM)
OCPBUGS-4758 - Revert Catalog PSA decisions for 4.13 (Marketplace)
OCPBUGS-4769 - Old AWS boot images vs. 4.12: unknown provider 'ec2'
OCPBUGS-4780 - Update openshift/builder release-4.13 to go1.19
OCPBUGS-4781 - Get Helm Release seems to be using List Releases api
OCPBUGS-4793 - CMO may generate Kubernetes events with a wrong object reference
OCPBUGS-4802 - Update formatting with gofmt for go1.19
OCPBUGS-4825 - Pods completed + deleted may leak
OCPBUGS-4827 - Ingress Controller is missing a required AWS resource permission for SC2S region us-isob-east-1
OCPBUGS-4873 - openshift-marketplace namespace missing "audit-version" and "warn-version" PSA label
OCPBUGS-4874 - Baremetal host data is still sometimes required
OCPBUGS-4883 - Default Git type to other info alert should get remove after changing the git type
OCPBUGS-4894 - Disabled Serverless add actions should not be displayed for Knative Service
OCPBUGS-4899 - coreos-installer output not available in the logs
OCPBUGS-4900 - Volume limits test broken on AWS and GCP TechPreview clusters
OCPBUGS-4906 - Cross-namespace template processing is not being tested
OCPBUGS-4909 - Can't reach own service when egress netpol are enabled
OCPBUGS-4913 - Need to wait longer for VM to obtain IP from DHCP
OCPBUGS-4941 - Fails to deprovision cluster when swift omits 'content-type' and there are empty containers
OCPBUGS-4950 - OLM K8s Dependencies should be at 1.25
OCPBUGS-4954 - [IBMCloud] COS Reclamation prevents ResourceGroup cleanup
OCPBUGS-4955 - Bundle Unpacker Using "Always" ImagePullPolicy for digests
OCPBUGS-4969 - ROSA Machinepool EgressIP Labels Not Discovered
OCPBUGS-4975 - Missing translation in ceph storage plugin
OCPBUGS-4986 - precondition: Do not claim warnings would have blocked
OCPBUGS-4997 - Agent ISO does not respect proxy settings
OCPBUGS-5001 - MachineConfigControllerPausedPoolKubeletCA should have a working runbook URI
OCPBUGS-501 - oc get dc fails when AllRequestBodies audit-profile is set in apiserver
OCPBUGS-5010 - Should always delete the must-gather pod when run the must-gather
OCPBUGS-5016 - Editing Pipeline in the ocp console to get information error
OCPBUGS-5018 - Upgrade from 4.11 to 4.12 with Windows machine workers (Spot Instances) failing due to: hcnCreateEndpoint failed in Win32: The object already exists.
OCPBUGS-5036 - Cloud Controller Managers do not react to changes in configuration leading to assorted errors
OCPBUGS-5045 - unit test data race with egress ip tests
OCPBUGS-5068 - [4.13] virtual media provisioning fails when iLO Ironic driver is used
OCPBUGS-5073 - Connection reset by peer issue with SSL OAuth Proxy when more than 80 route objects are created
OCPBUGS-5079 - [CI Watcher] pull-ci-openshift-console-master-e2e-gcp-console jobs: Process did not finish before 4h0m0s timeout
OCPBUGS-5085 - Should only show the selected catalog when after apply the ICSP and catalogsource
OCPBUGS-5101 - [GCP] [capi] Deletion of cluster is happening; it shouldn't be allowed
OCPBUGS-5116 - machine.openshift.io API is not supported in Machine API webhooks
OCPBUGS-512 - Permission denied when write data to mounted gcp filestore volume instance
OCPBUGS-5124 - kubernetes-nmstate does not pass CVP tests in 4.12
OCPBUGS-5136 - provisioning on ilo4-virtualmedia BMC driver fails with error: "Creating vfat image failed: Unexpected error while running command"
OCPBUGS-5140 - [alibabacloud] IPI install got bootstrap failure and without any node ready, due to enforced EIP bandwidth 5 Mbit/s
OCPBUGS-5151 - Installer - provisioning interface on master node not getting ipv4 dhcp ip address from bootstrap dhcp server on OCP IPI BareMetal install
OCPBUGS-5164 - Add support for API version v1beta1 for knativeServing and knativeEventing
OCPBUGS-5165 - Dev Sandbox clusters uses clusterType OSD and there is no way to enforce DEVSANDBOX
OCPBUGS-5182 - [azure] Fail to create master node with vm size in family ECIADSv5 and ECIASv5
OCPBUGS-5184 - [azure] Fail to create master node with vm size in standardNVSv4Family
OCPBUGS-5188 - Wrong message in MCCDrainError alert
OCPBUGS-5234 - [azure] Azure Stack Hub (wwt) UPI installation failed to scale up worker nodes using machinesets
OCPBUGS-5235 - mapi_instance_create_failed metric cannot work when set acceleratedNetworking: true on Azure
OCPBUGS-5269 - remove unnecessary RBAC in KCM: file removal
OCPBUGS-5275 - remove unnecessary RBAC in OCM
OCPBUGS-5287 - Bug with Red Hat Integration - 3scale - Managed Application Services causes operator-install-single-namespace.spec.ts to fail
OCPBUGS-5292 - Multus: Interface name contains an invalid character / [ocp 4.13]
OCPBUGS-5300 - WriteRequestBodies audit profile records routes/status events at RequestResponse level
OCPBUGS-5306 - One old machine stuck in Deleting and many co get degraded when doing master replacement on the cluster with OVN network
OCPBUGS-5346 - Reported vSphere Connection status is misleading
OCPBUGS-5347 - Clusteroperator Available condition is updated every 2 mins when operator is disabled
OCPBUGS-5353 - Dashboard graph should not be stacked - Kubernetes / Compute Resources / Pod Dashboard
OCPBUGS-5410 - [AWS-EBS-CSI-Driver] provision volume using customer kms key couldn't restore its snapshot successfully
OCPBUGS-5423 - openshift-marketplace pods cause PodSecurityViolation alert to fire
OCPBUGS-5428 - Many plugin SDK extension docs are missing descriptions
OCPBUGS-5432 - Downstream Operator-SDK v1.25.1 to OCP 4.13
OCPBUGS-5458 - wal: max entry size limit exceeded
OCPBUGS-5465 - Context Deadline exceeded when PTP service is disabled from the switch
OCPBUGS-5466 - Default CatalogSource aren't always reverted to default settings
OCPBUGS-5492 - CI "[Feature:bond] should create a pod with bond interface" fail for MTU migration jobs
OCPBUGS-5497 - MCDRebootError alarm disappears after 15 minutes
OCPBUGS-5498 - Host inventory quick start for OCP
OCPBUGS-5505 - Upgradeability check is throttled too much and with unnecessary non-determinism
OCPBUGS-5508 - Report topology usage in vSphere environment via telemetry
OCPBUGS-5517 - [Azure/ARO] Update Azure SDK to v63.1.0+incompatible
OCPBUGS-5520 - MCDPivotError alert fires due temporary transient failures
OCPBUGS-5523 - Catalog, fatal error: concurrent map read and map write
OCPBUGS-5524 - Disable vsphere intree tests that exercise multiple tests
OCPBUGS-5534 - [UI] When OCP and ODF are upgraded, refresh web console pop-up doesn't appear after ODF upgrade resulting in dashboard crash
OCPBUGS-5540 - Typo in WTO for Milliseconds
OCPBUGS-5542 - Project dropdown order is not as smart as project list page order
OCPBUGS-5546 - Machine API Provider Azure should not modify the Machine spec
OCPBUGS-5547 - Webhook Secret (1 of 2) is not removed when Knative Service is deleted
OCPBUGS-5559 - add default noProxy config for Azure
OCPBUGS-5733 - [Openshift Pipelines] Description of parameters are not shown in pipelinerun description page
OCPBUGS-5734 - Azure: VIP 168.63.129.16 should be noProxy to all clouds except Public
OCPBUGS-5736 - The main section of the page will keep loading after normal user login
OCPBUGS-5759 - Deletion of BYOH Windows node hangs in Ready,SchedulingDisabled
OCPBUGS-5802 - update sprig to v3 in cno
OCPBUGS-5836 - Incorrect redirection when user try to download windows oc binary
OCPBUGS-5842 - executes /host/usr/bin/oc
OCPBUGS-5851 - [CI-Watcher]: Using OLM descriptor components deletes operand
OCPBUGS-5873 - etcd_object_counts is deprecated and replaced with apiserver_storage_objects, causing "etcd Object Count" dashboard to only show OpenShift resources
OCPBUGS-5888 - Failed to install 4.13 ocp on SNO with "error during syncRequiredMachineConfigPools"
OCPBUGS-5891 - oc-mirror heads-only does not work with target name
OCPBUGS-5903 - gather default ingress controller definition
OCPBUGS-5922 - [2047299 Jira placeholder] nodeport not reachable port connection timeout
OCPBUGS-5939 - revert "force cert rotation every couple days for development" in 4.13
OCPBUGS-5948 - Runtime error using API Explorer with AdmissionReview resource
OCPBUGS-5949 - oc --icsp mapping scope does not match openshift icsp mapping scope
OCPBUGS-5959 - [4.13] Bootimage bump tracker
OCPBUGS-5988 - Degraded etcd on assisted-installer installation- bootstrap etcd is not removed properly
OCPBUGS-5991 - Kube APIServer panics in admission controller
OCPBUGS-5997 - Add Git Repository form shows empty permission content and non-working help link until a git url is entered
OCPBUGS-6004 - apiserver pods cannot reach etcd on single node IPv6 cluster: transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10"
OCPBUGS-6011 - openshift-client package has wrong version of kubectl bundled
OCPBUGS-6018 - The MCO can generate a rendered config with old KubeletConfig contents, blocking upgrades
OCPBUGS-6026 - cannot change /etc folder ownership inside pod
OCPBUGS-6033 - metallb 4.12.0-202301042354 (OCP 4.12) refers to external image
OCPBUGS-6049 - Do not show UpdateInProgress when status is Failing
OCPBUGS-6053 - availableUpdates: null results in run-time error on Cluster Settings page
OCPBUGS-6055 - thanos-ruler-user-workload-1 pod is getting repeatedly re-created after upgrade do 4.10.41
OCPBUGS-6063 - PVs(vmdk) get deleted when scaling down machineSet with vSphere IPI
OCPBUGS-6089 - Unnecessary event reprocessing
OCPBUGS-6092 - ovs-configuration.service fails - Error: Connection activation failed: No suitable device found for this connection
OCPBUGS-6097 - CVO hotloops on ImageStream and logs the information incorrectly
OCPBUGS-6098 - Show Git icon and URL in repository link in PLR details page should be based on the git provider
OCPBUGS-6101 - Daemonset is not upgraded after operator upgrade
OCPBUGS-6175 - Image registry Operator does not use Proxy when connecting to openstack
OCPBUGS-6185 - Update 4.13 ose-cluster-config-operator image to be consistent with ART
OCPBUGS-6187 - Update 4.13 openshift-state-metrics image to be consistent with ART
OCPBUGS-6189 - Update 4.13 ose-cluster-authentication-operator image to be consistent with ART
OCPBUGS-6191 - Update 4.13 ose-network-metrics-daemon image to be consistent with ART
OCPBUGS-6197 - Update 4.13 ose-openshift-apiserver image to be consistent with ART
OCPBUGS-6201 - Update 4.13 openshift-enterprise-pod image to be consistent with ART
OCPBUGS-6202 - Update 4.13 ose-cluster-kube-apiserver-operator image to be consistent with ART
OCPBUGS-6213 - Update 4.13 ose-machine-config-operator image to be consistent with ART
OCPBUGS-6222 - Update 4.13 ose-alibaba-cloud-csi-driver image to be consistent with ART
OCPBUGS-6228 - Update 4.13 coredns image to be consistent with ART
OCPBUGS-6231 - Update 4.13 ose-kube-storage-version-migrator image to be consistent with ART
OCPBUGS-6232 - Update 4.13 marketplace-operator image to be consistent with ART
OCPBUGS-6233 - Update 4.13 ose-cluster-openshift-apiserver-operator image to be consistent with ART
OCPBUGS-6234 - Update 4.13 ose-cluster-bootstrap image to be consistent with ART
OCPBUGS-6235 - Update 4.13 cluster-network-operator image to be consistent with ART
OCPBUGS-6238 - Update 4.13 oauth-server image to be consistent with ART
OCPBUGS-6240 - Update 4.13 ose-cluster-kube-storage-version-migrator-operator image to be consistent with ART
OCPBUGS-6241 - Update 4.13 operator-lifecycle-manager image to be consistent with ART
OCPBUGS-6247 - Update 4.13 ose-cluster-ingress-operator image to be consistent with ART
OCPBUGS-6262 - Add more logs to "oc extract" in mco-first boot service
OCPBUGS-6265 - When installing SNO with bootstrap in place it takes CVO 6 minutes to acquire the leader lease
OCPBUGS-6270 - Irrelevant vsphere platform data is required
OCPBUGS-6272 - E2E tests: Entire pipeline flow from Builder page Start the pipeline with workspace
OCPBUGS-631 - machineconfig service is failed to start because Podman storage gets corrupted
OCPBUGS-6486 - Image upload fails when installing cluster
OCPBUGS-6503 - admin ack test nondeterministically does a check post-upgrade
OCPBUGS-6504 - IPI Baremetal Master Node in DualStack getting fd69:: address randomly, OVN CrashLoopBackOff
OCPBUGS-6507 - Don't retry network policy peer pods if ips couldn't be fetched
OCPBUGS-6577 - Node-exporter NodeFilesystemAlmostOutOfSpace alert exception needed
OCPBUGS-6610 - Developer - Topology : 'Filter by resource' drop-down i18n misses
OCPBUGS-6621 - Image registry panics while deploying OCP in ap-southeast-4 AWS region
OCPBUGS-6624 - Issue deploying the master node with IPI
OCPBUGS-6634 - Let the console able to build on other architectures and compatible with prow builds
OCPBUGS-6646 - Ingress node firewall CI is broken with latest
OCPBUGS-6647 - User Preferences - Applications : Resource type drop-down i18n misses
OCPBUGS-6651 - Nodes unready in PublicAndPrivate / Private Hypershift setups behind a proxy
OCPBUGS-6660 - Uninstall Operator? modal instructions always reference optional checkbox
OCPBUGS-6663 - Platform baremetal warnings during create image when fields not defined
OCPBUGS-6682 - [OVN] ovs-configuration vSphere vmxnet3 allmulti workaround is now permanent
OCPBUGS-6698 - Fix conflict error message in cluster-ingress-operator's ensureNodePortService
OCPBUGS-6700 - Cluster-ingress-operator's updateIngressClass function logs success message when error
OCPBUGS-6701 - The ingress-operator spuriously updates ingressClass on startup
OCPBUGS-6714 - Traffic from egress IPs was interrupted after Cluster patch to Openshift 4.10.46
OCPBUGS-672 - Redhat-operators are failing regularly due to startup probe timing out which in turn increases CPU/Mem usage on Master nodes
OCPBUGS-6722 - s390x: failed to generate asset "Image": multiple "disk" artifacts found
OCPBUGS-6730 - Pod latency spikes are observed when there is a compaction/leadership transfer
OCPBUGS-6731 - Gathered Environment variables (HTTP_PROXY/HTTPS_PROXY) may contain sensible information and should be obfuscated
OCPBUGS-6741 - opm fails to serve FBC if cachedir not provided
OCPBUGS-6757 - Pipeline Repository (Pipeline-as-Code) list page shows an empty Event type column
OCPBUGS-6760 - Couldn't update/delete cpms on gcp private cluster
OCPBUGS-6762 - Enhance the user experience for the name-filter-input on Metrics target page
OCPBUGS-6765 - "Delete dependent objects of this resource" might cause confusions
OCPBUGS-6777 - [gcp][CORS-1988] "create manifests" without an existing "install-config.yaml" missing 4 YAML files in "
OCPBUGS-7421 - Missing i18n key for PAC section in Git import form
OCPBUGS-7424 - Bump cluster-ingress-operator to k8s APIs v0.26.1
OCPBUGS-7427 - dynamic-demo-plugin.spec.ts requires 10 minutes of unnecessary wait time
OCPBUGS-7438 - Egress service does not handle invalid nodeSelectors correctly
OCPBUGS-7482 - Fix handling of single failure-domain (non-tagged) deployments in vsphere
OCPBUGS-7483 - Hypershift installs on "platform: none" are broken
OCPBUGS-7488 - test flake: should not reconcile SC when state is Unmanaged
OCPBUGS-7495 - Platform type is ignored
OCPBUGS-7517 - Helm page crashes on old releases with a new Secret
OCPBUGS-7519 - NFS Storage Tests trigger Kernel Panic on Azure and Metal
OCPBUGS-7523 - Add new AWS regions for ROSA
OCPBUGS-7542 - Bump router to k8s APIs v0.26.1
OCPBUGS-7555 - Enable default sysctls for kubelet
OCPBUGS-7558 - Rebase coredns to 1.10.1
OCPBUGS-7563 - vSphere install can't complete with out-of-tree CCM
OCPBUGS-7579 - [azure] failed to parse client certificate when using certificate-based Service Principal with passpharse
OCPBUGS-7611 - PTPOperator config transportHost with AMQ is not detected
OCPBUGS-7616 - vSphere multiple in-tree test failures (non-zonal)
OCPBUGS-7617 - Azure Disk volume is taking time to attach/detach
OCPBUGS-7622 - vSphere UPI jobs failing with 'Managed cluster should have machine resources'
OCPBUGS-7648 - Bump cluster-dns-operator to k8s APIs v0.26.1
OCPBUGS-7689 - Project Admin is able to Label project with empty string in RHOCP 4
OCPBUGS-7696 - [ Azure ]not able to deploy machine with publicIp:true
OCPBUGS-7707 - /etc/NetworkManager/dispatcher.d needs to be relabeled during pivot from 8.6 to 9.2
OCPBUGS-7719 - Update to 4.13.0-ec.3 stuck on leaked MachineConfig
OCPBUGS-7729 - Remove ETCD liviness probe.
OCPBUGS-7731 - Need to cancel threads when agent-tui timeout is stopped
OCPBUGS-7733 - Afterburn fails on AWS/GCP clusters born in OCP 4.1/4.2
OCPBUGS-7743 - SNO upgrade from 4.12 to 4.13 rhel9.2 is broken cause of dnsmasq default config
OCPBUGS-7750 - fix gofmt check issue in network-metrics-daemon
OCPBUGS-7754 - ART having trouble building olm images
OCPBUGS-7774 - RawCNIConfig is printed in byte representation on failure, not human readable
OCPBUGS-7785 - migrate to using Lease for leader election
OCPBUGS-7806 - add "nfs-export" under PV details page
OCPBUGS-7809 - sg3_utils package is missing in the assisted-installer-agent Docker file
OCPBUGS-781 - ironic-proxy is using a deprecated field to fetch cluster VIP
OCPBUGS-7833 - Storage tests failing in no-capabilities job
OCPBUGS-7837 - hypershift: aws-ebs-csi-driver-operator uses guest cluster proxy causing PV provisioning failure
OCPBUGS-7860 - [azure] message is unclear when missing clientCertificatePassword in osServicePrincipal.json
OCPBUGS-7876 - [Descheduler] Enabling LifeCycleUtilization to test namespace filtering does not work
OCPBUGS-7879 - Devfile isn't be processed correctly on 'Add from git repo'
OCPBUGS-7896 - MCO should not add keepalived pod manifests in case of VSPHERE UPI
OCPBUGS-7899 - ODF Monitor pods failing to be bounded because timeout issue with thin-csi SC
OCPBUGS-7903 - Pool degraded with error: rpm-ostree kargs: signal: terminated
OCPBUGS-7909 - Baremetal runtime prepender creates /etc/resolv.conf mode 0600 and bad selinux context
OCPBUGS-794 - OLM version rule is not clear
OCPBUGS-7940 - apiserver panics in admission controller
OCPBUGS-7943 - AzureFile CSI driver does not compile with cachito
OCPBUGS-7970 - [E2E] Always close the filter dropdown in listPage.filter.by
OCPBUGS-799 - Reply packet for DNS conversation to service IP uses pod IP as source
OCPBUGS-8066 - Create Serverless Function form breaks if Pipeline Operator is not installed
OCPBUGS-8086 - Visual issues with listing items
OCPBUGS-8243 - [release 4.13] Gather Monitoring pods' Persistent Volumes
OCPBUGS-8308 - Bump openshift/kubernetes to 1.26.2
OCPBUGS-8312 - IPI on Power VS clusters cannot deploy MCO
OCPBUGS-8326 - Azure cloud provider should use Kubernetes 1.26 dependencies
OCPBUGS-8341 - Unable to set capabilities with agent installer based installation
OCPBUGS-8342 - create cluster-manifests fails when imageContentSources is missing
OCPBUGS-8353 - PXE support is incomplete
OCPBUGS-8381 - Console shows x509 error when requesting token from oauth endpoint
OCPBUGS-8401 - Bump openshift/origin to kube 1.26.2
OCPBUGS-8424 - ControlPlaneMachineSet: Machine's Node should be Ready to consider the Machine Ready
OCPBUGS-8445 - cgroups default setting in OCP 4.13 generates extra MachineConfig
OCPBUGS-8463 - OpenStack Failure domains as 4.13 TechPreview
OCPBUGS-8471 - [4.13] egress firewall only createas 1 acl for long namespace names
OCPBUGS-8475 - TestBoundTokenSignerController causes unrecoverable disruption in e2e-gcp-operator CI job
OCPBUGS-8481 - CAPI rebases 4.13 backports
OCPBUGS-8490 - agent-tui: display additional checks only when primary check fails
OCPBUGS-8498 - aws-ebs-csi-driver-operator ServiceAccount does not include the HCP pull-secret in its imagePullSecrets
OCPBUGS-8505 - [4.13] egress firewall acls are deleted on restart
OCPBUGS-8511 - [4.13+ ONLY] Don't use port 80 in bootstrap IPI bare metal
OCPBUGS-855 - When setting allowedRegistries urls the openshift-samples operator is degraded
OCPBUGS-859 - monitor not working with UDP lb when externalTrafficPolicy: Local
OCPBUGS-860 - CSR are generated with incorrect Subject Alternate Names
OCPBUGS-8699 - Metal IPI Install Rate Below 90%
OCPBUGS-8701 - oc patch project not working with OCP 4.12
OCPBUGS-8702 - OKD SCOS: remove workaround for rpm-ostree auth
OCPBUGS-8703 - fails to switch to kernel-rt with rhel 9.2
OCPBUGS-8710 - [4.13] don't enforce PSa in 4.13
OCPBUGS-8712 - AES-GCM encryption at rest is not supported by kube-apiserver-operator
OCPBUGS-8719 - Allow the user to scroll the content of the agent-tui details view
OCPBUGS-8741 - [4.13] Pods in same deployment will have different ability to query services in same namespace from one another; ocp 4.10
OCPBUGS-8742 - Origin tests should not specify readyz as the health check path
OCPBUGS-881 - fail to create install-config.yaml as apiVIP and ingressVIP are not in machine networks
OCPBUGS-8941 - Introduce tooltips for contextual information
OCPBUGS-904 - Alerts from MCO are missing namespace
OCPBUGS-9079 - ICMP fragmentation needed sent to pods behind a service don't seem to reach the pods
OCPBUGS-91 - [ExtDNS] New TXT record breaks downward compatibility by retroactively limiting record length
OCPBUGS-9132 - WebSCale: ovn logical router polices incorrect/l3 gw config not updated after IP change
OCPBUGS-9185 - Pod latency spikes are observed when there is a compaction/leadership transfer
OCPBUGS-9233 - ConsoleQuickStart {{copy}} and {{execute}} features do not work in some cases
OCPBUGS-931 - [osp][octavia lb] NodePort allocation cannot be disabled for LB type svcs
OCPBUGS-9338 - editor toggle radio input doesn't have distinguishable attributes
OCPBUGS-9389 - Detach code in vsphere csi driver is failing
OCPBUGS-948 - OLM sets invalid SCC label on its namespaces
OCPBUGS-95 - NMstate removes egressip in OpenShift cluster with SDN plugin
OCPBUGS-9913 - bacport tests for PDBUnhealthyPodEvictionPolicy as Tech Preview
OCPBUGS-9924 - Remove unsupported warning in oc-mirror when using the --skip-pruning flag
OCPBUGS-9926 - Enable node healthz server for ovnk in CNO
OCPBUGS-9951 - fails to reconcile to RT kernel on interrupted updates
OCPBUGS-9957 - Garbage collect grafana-dashboard-etcd
OCPBUGS-996 - Control Plane Machine Set Operator OnDelete update should cause an error when more than one machine is ready in an index
OCPBUGS-9963 - Better to change the error information more clearly to help understand
OCPBUGS-9968 - Operands running management side missing affinity, tolerations, node selector and priority rules than the operator
- References:
https://access.redhat.com/security/cve/CVE-2021-4235
https://access.redhat.com/security/cve/CVE-2021-4238
https://access.redhat.com/security/cve/CVE-2021-20329
https://access.redhat.com/security/cve/CVE-2021-38561
https://access.redhat.com/security/cve/CVE-2021-43519
https://access.redhat.com/security/cve/CVE-2021-44964
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1587
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-2509
https://access.redhat.com/security/cve/CVE-2022-2990
https://access.redhat.com/security/cve/CVE-2022-3080
https://access.redhat.com/security/cve/CVE-2022-3259
https://access.redhat.com/security/cve/CVE-2022-4203
https://access.redhat.com/security/cve/CVE-2022-4304
https://access.redhat.com/security/cve/CVE-2022-4450
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-23525
https://access.redhat.com/security/cve/CVE-2022-23526
https://access.redhat.com/security/cve/CVE-2022-26280
https://access.redhat.com/security/cve/CVE-2022-27191
https://access.redhat.com/security/cve/CVE-2022-29154
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-34903
https://access.redhat.com/security/cve/CVE-2022-38023
https://access.redhat.com/security/cve/CVE-2022-38177
https://access.redhat.com/security/cve/CVE-2022-38178
https://access.redhat.com/security/cve/CVE-2022-40674
https://access.redhat.com/security/cve/CVE-2022-41316
https://access.redhat.com/security/cve/CVE-2022-41717
https://access.redhat.com/security/cve/CVE-2022-41721
https://access.redhat.com/security/cve/CVE-2022-41723
https://access.redhat.com/security/cve/CVE-2022-41724
https://access.redhat.com/security/cve/CVE-2022-41725
https://access.redhat.com/security/cve/CVE-2022-42010
https://access.redhat.com/security/cve/CVE-2022-42011
https://access.redhat.com/security/cve/CVE-2022-42012
https://access.redhat.com/security/cve/CVE-2022-42898
https://access.redhat.com/security/cve/CVE-2022-42919
https://access.redhat.com/security/cve/CVE-2022-46146
https://access.redhat.com/security/cve/CVE-2022-47629
https://access.redhat.com/security/cve/CVE-2023-0056
https://access.redhat.com/security/cve/CVE-2023-0215
https://access.redhat.com/security/cve/CVE-2023-0216
https://access.redhat.com/security/cve/CVE-2023-0217
https://access.redhat.com/security/cve/CVE-2023-0229
https://access.redhat.com/security/cve/CVE-2023-0286
https://access.redhat.com/security/cve/CVE-2023-0361
https://access.redhat.com/security/cve/CVE-2023-0401
https://access.redhat.com/security/cve/CVE-2023-0620
https://access.redhat.com/security/cve/CVE-2023-0665
https://access.redhat.com/security/cve/CVE-2023-0778
https://access.redhat.com/security/cve/CVE-2023-25000
https://access.redhat.com/security/cve/CVE-2023-25165
https://access.redhat.com/security/cve/CVE-2023-25173
https://access.redhat.com/security/cve/CVE-2023-25577
https://access.redhat.com/security/cve/CVE-2023-25725
https://access.redhat.com/security/cve/CVE-2023-25809
https://access.redhat.com/security/cve/CVE-2023-27561
https://access.redhat.com/security/cve/CVE-2023-28642
https://access.redhat.com/security/cve/CVE-2023-30570
https://access.redhat.com/security/cve/CVE-2023-30841
https://access.redhat.com/security/updates/classification/#important
https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBZGVrhNzjgjWX9erEAQjD7BAAihZ8nlrasEU8QISGjHMUkUXKPHgV6LlZ
IT2h0MLam8ICSCDdZ8PUVXhWP+CTTIYYdpEPTaIdKdB16iecRXm2ML8GtQ38zSjC
LpCB4NUmAdoH91FbT2oazgrCgg+2hizfufLYk/8nNm9yVR0zT5kZbuXMFZH/PbCb
dYYyRsXsNt4+MpaWGf1q3jS7OX8l5UXbfO+nnKHWoow5/PeclygxFbRclr7o62Dy
tnfgs+OwbroI6L0nohsUTk4Es1koyD8FaGdo28ViLcgVH1VDhBqzHXSAe1P+XmAc
PSG6slSRIrgJpARfN8OEI89wfI+ttyqEi4yAdoKjCo/pbshhLw3JZQcavmQc8XEK
o1afTtx0XFHJsAdZRjvq+7zExqnDANRLbtkkYG2gYuc8LgGmh6P0ZlhxQFMS3f/T
cTLSLaP6XSnHQaJyc0kqULHcWBZRzepcIDPYkmTCbCVCwLjXuIoF6eMQvo7eRXCy
4qN3nT0+M90jWxf/uQzo9NpeWFB7y2cccHMvaPzZ8cAAxpwM3Rphutu9lzRfJCl8
TMincIMIFq3vLmrfxHX5YOKfgH/Kjc06TbtnzxtucFQVNFxyKIWKgJB/hl1mGDTJ
8cibppoX+mLmUirPuu+5JwaAmq7skX5HKX3r3t8sajmij17nS2Ff8q52ZLgdZQ6H
XbiJN3SZj5U=
=WGO2
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:
Red Hat Advanced Cluster Management for Kubernetes 2.7.3 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/release_notes/
Security fix(es):

* CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability
* CVE-2022-3841 RHACM: unauthenticated SSRF in console API endpoint
* CVE-2023-29017 vm2: Sandbox Escape
* CVE-2023-29199 vm2: Sandbox Escape
* CVE-2023-30547 vm2: Sandbox Escape when exception sanitization
- Bugs fixed (https://bugzilla.redhat.com/):
2139426 - CVE-2022-3841 RHACM: unauthenticated SSRF in console API endpoint
2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability
2185374 - CVE-2023-29017 vm2: sandbox escape
2187409 - CVE-2023-29199 vm2: Sandbox Escape
2187608 - CVE-2023-30547 vm2: Sandbox Escape when exception sanitization
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 9) - noarch
Red Hat Enterprise Linux CRB (v. 9) - aarch64, noarch, x86_64
- Description:
EDK (Embedded Development Kit) is a project to enable UEFI support for Virtual Machines. This package contains a sample 64-bit UEFI firmware for QEMU and KVM.
Security Fix(es):
-
openssl: X.400 address type confusion in X.509 GeneralName (CVE-2023-0286)
-
edk2: integer underflow in SmmEntryPoint function leads to potential SMM privilege escalation (CVE-2021-38578)
-
openssl: timing attack in RSA Decryption implementation (CVE-2022-4304)
-
openssl: double free after calling PEM_read_bio_ex (CVE-2022-4450)
-
openssl: use-after-free following BIO_new_NDEF (CVE-2023-0215)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 9.2 Release Notes linked from the References section.

Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1960321 - CVE-2021-38578 edk2: integer underflow in SmmEntryPoint function leads to potential SMM privilege escalation
1983086 - Assertion failure when creating 1024 VCPU VM: [...]UefiCpuPkg/CpuMpPei/CpuBist.c(186): !EFI_ERROR (Status)
2125336 - Please add edk2-aarch64 and edk2-tools to CRB in RHEL 9
2132951 - edk2: Sort traditional virtualization builds before Confidential Computing builds
2157656 - [edk2] [aarch64] Unable to initialize EFI firmware when using edk2-aarch64-20221207gitfff6d81270b5-1.el9 in some hardwares
2162307 - Broken GRUB output on a serial console
2164440 - CVE-2023-0286 openssl: X.400 address type confusion in X.509 GeneralName
2164487 - CVE-2022-4304 openssl: timing attack in RSA Decryption implementation
2164492 - CVE-2023-0215 openssl: use-after-free following BIO_new_NDEF
2164494 - CVE-2022-4450 openssl: double free after calling PEM_read_bio_ex
2168046 - [edk2] BIOS Release Date string is unexpected length
2174605 - [EDK2] disable dynamic mmio window
- Package List:
Red Hat Enterprise Linux AppStream (v. 9):
Source: edk2-20221207gitfff6d81270b5-9.el9_2.src.rpm
noarch:
edk2-aarch64-20221207gitfff6d81270b5-9.el9_2.noarch.rpm
edk2-ovmf-20221207gitfff6d81270b5-9.el9_2.noarch.rpm
Red Hat Enterprise Linux CRB (v. 9):

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Summary:
The Migration Toolkit for Containers (MTC) 1.7.9 is now available.

Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Bugs fixed (https://bugzilla.redhat.com/):
2174485 - CVE-2023-25173 containerd: Supplementary groups are not set up properly
2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption
2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics
- Description:
Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library.

Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.

JIRA issues fixed (https://issues.redhat.com/):
JWS-2933 - Update openssl from JBCS to versions from 2.4.51-SP2
- Bugs fixed (https://bugzilla.redhat.com/):
2139896 - Requested TSC frequency outside tolerance range & TSC scaling not supported
2145146 - CDI operator is not creating PrometheusRule resource with alerts if CDI resource is incorrect
2148383 - Migration metrics values are not sum up values from all VMIs
2149409 - HPP mounter deployment can't mount as unprivileged
2168489 - Overview -> Migrations - The "Bandwidth consumption" Graph display with wrong values
2184435 - [cnv-4.12] virt-handler should not delete any pre-configured mediated devices i these are provided by an external provider
2222191 - [cnv-4.12] manually increasing the number of virt-api pods does not work
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202302-0195", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "ucosminexus application server", "scope": null, "trust": 1.6, "vendor": "\u65e5\u7acb", "version": null }, { "model": "ucosminexus service platform", "scope": null, "trust": 1.6, "vendor": "\u65e5\u7acb", "version": null }, { "model": "ucosminexus primary 
server base", "scope": null, "trust": 1.6, "vendor": "\u65e5\u7acb", "version": null }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "3.0.0" }, { "model": "network security", "scope": "lt", "trust": 1.0, "vendor": "stormshield", "version": "4.3.16" }, { "model": "network security", "scope": "gte", "trust": 1.0, "vendor": "stormshield", "version": "4.0.0" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1t" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "3.0.8" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1" }, { "model": "network security", "scope": "gte", "trust": 1.0, "vendor": "stormshield", "version": "4.4.0" }, { "model": "network security", "scope": "lt", "trust": 1.0, "vendor": "stormshield", "version": "4.6.3" }, { "model": "jp1/navigation platform for developers", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/it desktop management 2 - operations director", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "ucosminexus service architect", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "\u5f97\u9078\u8857\u30fbgcb", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "ucosminexus application server-r", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "iot \u5171\u901a\u57fa\u76e4", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "jp1/data highway - server", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/automatic job management system 3 - manager", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/it desktop management 2 - smart device manager", "scope": null, "trust": 
0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/performance management", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/it desktop management 2 - manager", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/service support starter edition", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/automatic job management system 3 - definitions assistant", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "connexive application platform", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "jp1/service support", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "ucosminexus developer", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "vran", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "jp1/automatic operation", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "connexive pf", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "jp1/snmp system observer", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "esmpro/serveragent", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "openssl", "scope": null, "trust": 0.8, "vendor": "openssl", "version": null }, { "model": "nec multimedia olap for \u6620\u50cf\u5206\u6790\u30b5\u30fc\u30d3\u30b9", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "ix \u30eb\u30fc\u30bf", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "jp1/base", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "neoface monitor", "scope": 
null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "spoolserver/reportfiling", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "\u30d7\u30ed\u30b0\u30e9\u30df\u30f3\u30b0\u74b0\u5883 for java", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/navigation platform", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "cosminexus http server", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "nec enhanced speech analysis", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "jp1/file transmission server/ftp", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/operations analytics", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "\u990a\u6b96\u9b5a\u30b5\u30a4\u30ba\u6e2c\u5b9a\u81ea\u52d5\u5316\u30b5\u30fc\u30d3\u30b9", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null }, { "model": "jp1/data highway - server starter edition", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "webotx application server", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003616" }, { "db": "NVD", "id": "CVE-2022-4450" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0.8", "versionStartIncluding": "3.0.0", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1t", "versionStartIncluding": "1.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.6.3", "versionStartIncluding": "4.4.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3.16", "versionStartIncluding": "4.0.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-4450" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "173547" }, { "db": "PACKETSTORM", "id": "172441" }, { "db": "PACKETSTORM", "id": "171957" }, { "db": "PACKETSTORM", "id": "172460" }, { "db": "PACKETSTORM", "id": "172238" }, { "db": "PACKETSTORM", "id": "172147" }, { "db": "PACKETSTORM", "id": "172733" }, { "db": "PACKETSTORM", "id": "174517" } ], "trust": 0.8 }, "cve": "CVE-2022-4450", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": 
"NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 7.5, "baseSeverity": "High", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2022-4450", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-4450", "trust": 1.8, "value": "HIGH" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003616" }, { "db": "NVD", "id": "CVE-2022-4450" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The function PEM_read_bio_ex() reads a PEM file from a BIO and parses and\ndecodes the \"name\" (e.g. \"CERTIFICATE\"), any header data and the payload data. \nIf the function succeeds then the \"name_out\", \"header\" and \"data\" arguments are\npopulated with pointers to buffers containing the relevant decoded data. The\ncaller is responsible for freeing those buffers. It is possible to construct a\nPEM file that results in 0 bytes of payload data. In this case PEM_read_bio_ex()\nwill return a failure code but will populate the header argument with a pointer\nto a buffer that has already been freed. If the caller also frees this buffer\nthen a double free will occur. This will most likely lead to a crash. 
This\ncould be exploited by an attacker who has the ability to supply malicious PEM\nfiles for parsing to achieve a denial of service attack. \n\nThe functions PEM_read_bio() and PEM_read() are simple wrappers around\nPEM_read_bio_ex() and therefore these functions are also directly affected. \n\nThese functions are also called indirectly by a number of other OpenSSL\nfunctions including PEM_X509_INFO_read_bio_ex() and\nSSL_CTX_use_serverinfo_file() which are also vulnerable. Some OpenSSL internal\nuses of these functions are not vulnerable because the caller does not free the\nheader argument if PEM_read_bio_ex() returns a failure code. These locations\ninclude the PEM_read_bio_TYPE() functions as well as the decoders introduced in\nOpenSSL 3.0. \n\nThe OpenSSL asn1parse command line application is also impacted by this issue. In OpenSSL, when a PEM file containing 0 bytes of payload data is parsed, PEM_read_bio_ex() returns a failure code but populates the header argument with a pointer to an already freed buffer, resulting in a double free; an attacker who supplies a malicious PEM file may thereby cause a denial of service (crash). Bugs fixed (https://bugzilla.redhat.com/):\n\n2212085 - CVE-2023-3089 openshift: OCP \u0026 FIPS mode\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: OpenShift Container Platform 4.13.0 security update\nAdvisory ID: RHSA-2023:1326-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:1326\nIssue date: 2023-05-17\nCVE Names: CVE-2021-4235 CVE-2021-4238 CVE-2021-20329 \n CVE-2021-38561 CVE-2021-43519 CVE-2021-44964 \n CVE-2022-1271 CVE-2022-1586 CVE-2022-1587 \n CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 \n CVE-2022-2509 CVE-2022-2990 CVE-2022-3080 \n CVE-2022-3259 CVE-2022-4203 CVE-2022-4304 \n CVE-2022-4450 CVE-2022-21698 CVE-2022-23525 \n CVE-2022-23526 CVE-2022-26280 CVE-2022-27191 \n CVE-2022-29154 CVE-2022-29824 CVE-2022-34903 \n CVE-2022-38023 CVE-2022-38177 CVE-2022-38178 \n CVE-2022-40674 CVE-2022-41316 CVE-2022-41717 \n CVE-2022-41721 CVE-2022-41723 CVE-2022-41724 \n CVE-2022-41725 CVE-2022-42010 CVE-2022-42011 \n CVE-2022-42012 CVE-2022-42898 CVE-2022-42919 \n CVE-2022-46146 CVE-2022-47629 CVE-2023-0056 \n CVE-2023-0215 CVE-2023-0216 CVE-2023-0217 \n CVE-2023-0229 CVE-2023-0286 CVE-2023-0361 \n CVE-2023-0401 CVE-2023-0620 CVE-2023-0665 \n CVE-2023-0778 CVE-2023-25000 CVE-2023-25165 \n CVE-2023-25173 CVE-2023-25577 CVE-2023-25725 \n CVE-2023-25809 CVE-2023-27561 CVE-2023-28642 \n CVE-2023-30570 CVE-2023-30841 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.13.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.13. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. 
A Common Vulnerability Scoring System (CVSS) base score,\nwhich\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.13.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2023:1325\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html\n\nSecurity Fix(es):\n\n* goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as\nrandom as they should be (CVE-2021-4238)\n\n* go-yaml: Denial of Service in go-yaml (CVE-2021-4235)\n\n* mongo-go-driver: specific cstrings input may not be properly validated\n(CVE-2021-20329)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* helm: Denial of service through through repository index file\n(CVE-2022-23525)\n\n* helm: Denial of service through schema file (CVE-2022-23526)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* vault: insufficient certificate revocation list checking (CVE-2022-41316)\n\n* golang: net/http: excessive memory growth in a Go server accepting HTTP/2\nrequests (CVE-2022-41717)\n\n* x/net/http2/h2c: request smuggling (CVE-2022-41721)\n\n* net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK\ndecoding (CVE-2022-41723)\n\n* golang: crypto/tls: large handshake records may 
cause panics\n(CVE-2022-41724)\n\n* golang: net/http, mime/multipart: denial of service from excessive\nresource consumption (CVE-2022-41725)\n\n* exporter-toolkit: authentication bypass via cache poisoning\n(CVE-2022-46146)\n\n* vault: Vault\u2019s Microsoft SQL Database Storage Backend Vulnerable to SQL\nInjection Via Configuration File (CVE-2023-0620)\n\n* hashicorp/vault: Vault\u2019s PKI Issuer Endpoint Did Not Correctly Authorize\nAccess to Issuer Metadata (CVE-2023-0665)\n\n* hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations\n(CVE-2023-25000)\n\n* helm: getHostByName Function Information Disclosure (CVE-2023-25165)\n\n* containerd: Supplementary groups are not set up properly (CVE-2023-25173)\n\n* runc: volume mount race condition (regression of CVE-2019-19921)\n(CVE-2023-27561)\n\n* runc: AppArmor can be bypassed when `/proc` inside the container is\nsymlinked with a specific mount configuration (CVE-2023-28642)\n\n* baremetal-operator: plain-text username and hashed password readable by\nanyone having a cluster-wide read-access (CVE-2023-30841)\n\n* runc: Rootless runc makes `/sys/fs/cgroup` writable (CVE-2023-25809)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAll OpenShift Container Platform 4.13 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift CLI (oc)\nor web console. Instructions for upgrading a cluster are available at\nhttps://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html\n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.13 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nfor x86_64, s390x, ppc64le, and aarch64 architectures. The image digests\nmay be found at\nhttps://quay.io/repository/openshift-release-dev/ocp-release?tab=tags\n\nThe sha values for the release are:\n\n(For x86_64 architecture)\nThe image digest is\nsha256:74b23ed4bbb593195a721373ed6693687a9b444c97065ce8ac653ba464375711\n\n(For s390x architecture)\nThe image digest is\nsha256:a32d509d960eb3e889a22c4673729f95170489789c85308794287e6e9248fb79\n\n(For ppc64le architecture)\nThe image digest is\nsha256:bca0e4a4ed28b799e860e302c4f6bb7e11598f7c136c56938db0bf9593fb76f8\n\n(For aarch64 architecture)\nThe image digest is\nsha256:e07e4075c07fca21a1aed9d7f9c165696b1d0fa4940a219a000894e5683d846c\n\nAll OpenShift Container Platform 4.13 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1770297 - console odo download link needs to go to an official location or have caveats [openshift-4.4]\n1853264 - Metrics produce high unbound cardinality\n1877261 - [RFE] Mounted volume size issue when restore a larger size pvc than snapshot\n1904573 - OpenShift: containers modify /etc/passwd group writable\n1943194 - when using gpus, more nodes than needed are created by the node autoscaler\n1948666 - After entering valid git repo url on Import from git page, throwing warning message instead Validated\n1971033 - CVE-2021-20329 mongo-go-driver: specific cstrings input may not be properly validated\n2005232 - Pods list page should only show Create Pod button to user has sufficient permission\n2016006 - Repositories list does not show the running pipelinerun as last pipelinerun\n2027000 - The user is ignored when we create a new file using a MachineConfig\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047299 - nodeport not reachable port connection timeout\n2050230 - Implement LIST call chunking in openshift-sdn\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2065166 - GCP - Less privileged service accounts are created with Service Account User role\n2066388 - Wrong Error generates when https is missing in the value of `regionEndpoint` in `configs.imageregistry.operator.openshift.io/cluster`\n2066664 - [cluster-storage-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles\n2070744 - openshift-install destroy in us-gov-west-1 results in infinite loop - AWS govcloud\n2075548 - Support AllocateLoadBalancerNodePorts=False with ETP=local, LGW mode\n2076619 - Could not create deployment with an unknown git repo and builder image build strategy\n2078222 - egressIPs behave inconsistently towards in-cluster traffic (hosts and services backed by host-networked pods)\n2079981 - PVs not deleting on azure (or very slow to delete) 
since CSI migration to azuredisk\n2081858 - OVN-Kubernetes: SyncServices for nodePortWatcherIptables should propagate failures back to caller\n2083087 - \"Delete dependent objects of this resource\" might cause confusions\n2084452 - PodDisruptionBudgets help message should be semantic\n2087043 - Cluster API components should use K8s 1.24 dependencies\n2087553 - No rhcos-4.11/x86_64 images in the 2 new regions on alibabacloud, \"ap-northeast-2 (South Korea (Seoul))\" and \"ap-southeast-7 (Thailand (Bangkok))\"\n2089093 - CVO hotloops on OperatorGroup due to the diff of \"upgradeStrategy\": string(\"Default\")\n2089138 - CVO hotloops on ValidatingWebhookConfiguration /performance-addon-operator\n2090680 - upgrade for a disconnected cluster get hang on retrieving and verifying payload\n2092567 - Network policy is not being applied as expected\n2092811 - Datastore name is too long\n2093339 - [rebase v1.24] Only known images used by tests\n2095719 - serviceaccounts are not updated after upgrade from 4.10 to 4.11\n2100181 - WebScale: configure-ovs.sh fails because it picks the wrong default interface\n2100429 - [apiserver-auth] default SCC restricted allow volumes don\u0027t have \"ephemeral\" caused deployment with Generic Ephemeral Volumes stuck at Pending\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2104978 - MCD degrades are not overwrite-able by subsequent errors\n2110565 - PDB: Remove add/edit/remove actions in Pod resource action menu\n2110570 - Topology sidebar: Edit pod count shows not the latest replicas value when edit the count again\n2110982 - On GCP, need to check load balancer health check IPs required for restricted installation\n2113973 - operator scc is nor fixed when we define a custom scc with readOnlyRootFilesystem: true\n2114515 - Getting critical NodeFilesystemAlmostOutOfSpace alert for 4K tmpfs\n2115265 - Search page: LazyActionMenus are shown below Add/Remove from navigation button\n2116686 - 
[capi] Cluster kind should be valid\n2117374 - Improve Pod Admission failure for restricted-v2 denials that pass with restricted\n2135339 - CVE-2022-41316 vault: insufficient certificate revocation list checking\n2149436 - CVE-2022-46146 exporter-toolkit: authentication bypass via cache poisoning\n2154196 - CVE-2022-23526 helm: Denial of service through schema file\n2154202 - CVE-2022-23525 helm: Denial of service through through repository index file\n2156727 - CVE-2021-4235 go-yaml: Denial of Service in go-yaml\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n2161274 - CVE-2022-41717 golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests\n2162182 - CVE-2022-41721 x/net/http2/h2c: request smuggling\n2168458 - CVE-2023-25165 helm: getHostByName Function Information Disclosure\n2174485 - CVE-2023-25173 containerd: Supplementary groups are not set up properly\n2175721 - CVE-2023-27561 runc: volume mount race condition (regression of CVE-2019-19921)\n2178358 - CVE-2022-41723 net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding\n2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption\n2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics\n2182883 - CVE-2023-28642 runc: AppArmor can be bypassed when `/proc` inside the container is symlinked with a specific mount configuration\n2182884 - CVE-2023-25809 runc: Rootless runc makes `/sys/fs/cgroup` writable\n2182972 - CVE-2023-25000 hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations\n2182981 - CVE-2023-0665 hashicorp/vault: Vault\u2019s PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata\n2184663 - CVE-2023-0620 vault: Vault\u2019s Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File\n2190116 - CVE-2023-30841 baremetal-operator: plain-text 
username and hashed password readable by anyone having a cluster-wide read-access\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOCPBUGS-10036 - Enable aesgcm encryption provider by default in openshift/api\nOCPBUGS-10038 - Enable aesgcm encryption provider by default in openshift/cluster-config-operator\nOCPBUGS-10042 - Enable aesgcm encryption provider by default in openshift/cluster-kube-apiserver-operator\nOCPBUGS-10043 - Enable aesgcm encryption provider by default in openshift/cluster-openshift-apiserver-operator\nOCPBUGS-10044 - Enable aesgcm encryption provider by default in openshift/cluster-authentication-operator\nOCPBUGS-10047 - oc-mirror print log: unable to parse reference oci://mno/redhat-operator-index:v4.12\nOCPBUGS-10057 - With WPC card configured as GM or BC, phc2sys clock lock state is shown as FREERUN in ptp metrics while it should be LOCKED\nOCPBUGS-10213 - aws: mismatch between RHCOS and AWS SDK regions\nOCPBUGS-10220 - Newly provisioned machines unable to join cluster\nOCPBUGS-10221 - Risk cache warming takes too long on channel changes\nOCPBUGS-10237 - Limit the nested repository path while mirroring the images using oc-mirror for those who cant have nested paths in their container registry\nOCPBUGS-10239 - [release-4.13] Fix of ServiceAccounts gathering\nOCPBUGS-10249 - PollConsoleUpdates won\u0027t fire toast if one or more manifests errors when plugins change\nOCPBUGS-10267 - NetworkManager TUI quits regardless of a detected unsupported configuration\nOCPBUGS-10271 - [4.13] Netflink overflow alert\nOCPBUGS-10278 - Graph-data is not mounted on graph-builder correctly while install using graph-data image built by oc-mirror\nOCPBUGS-10281 - Openshift Ansible OVS version out of sync with RHCOS\nOCPBUGS-10291 - Broken link for Ansible tagging\nOCPBUGS-10298 - TenantID is ignored in some cases\nOCPBUGS-10320 - Catalogs should not be included in the ImageContentSourcePolicy.yaml\nOCPBUGS-10321 - command cannot be worked after chroot 
/host for oc debug pod\nOCPBUGS-1033 - Multiple extra manifests in the same file are not applied correctly\nOCPBUGS-10334 - Nutanix cloud-controller-manager pod not have permission to get/list ConfigMap\nOCPBUGS-10353 - kube-apiserver not receiving or processing shutdown signal after coreos 9.2 bump\nOCPBUGS-10367 - Pausing pools in OCP 4.13 will cause critical alerts to fire\nOCPBUGS-10377 - [gcp] IPI installation with Shielded VMs enabled failed on restarting the master machines\nOCPBUGS-10404 - Workload annotation missing from deployments\nOCPBUGS-10421 - RHCOS 4.13 live iso x84_64 contains restrictive policy.json\nOCPBUGS-10426 - node-topology is not exported due to kubelet.sock: connect: permission denied \nOCPBUGS-10427 - 4.1 born cluster fails to scale-up due to podman run missing `--authfile` flag\nOCPBUGS-10432 - CSI Inline Volume admission plugin does not log object name correctly\nOCPBUGS-10440 - OVN IPSec - does not create IPSec tunnels\nOCPBUGS-10474 - OpenShift pipeline TaskRun(s) column Duration is not present as column in UI\nOCPBUGS-10476 - Disable netlink mode of netclass collector in Node Exporter. 
\nOCPBUGS-1048 - if tag categories don\u0027t exist, the installation will fail to bootstrap\nOCPBUGS-10483 - [4.13 arm64 image][AWS EFS] Driver fails to get installed/exec format error\nOCPBUGS-10558 - MAPO failing to retrieve flavour information after rotating credentials\nOCPBUGS-10585 - [4.13] Request to update RHCOS installer bootimage metadata \nOCPBUGS-10586 - Console shows x509 error when requesting token from oauth endpoint\nOCPBUGS-10597 - The agent-tui shows again during the installation\nOCPBUGS-1061 - administrator console, monitoring-alertmanager-edit user list or create silence, \"Observe - Alerting - Silences\" page is pending\nOCPBUGS-10645 - 4.13: Operands running management side missing affinity, tolerations, node selector and priority rules than the operator\nOCPBUGS-10656 - create image command erroneously logs that Base ISO was obtained from release\nOCPBUGS-10657 - When releaseImage is a digest the create image command generates spurious warning\nOCPBUGS-10658 - Wrong PrimarySubnet in OpenstackProviderSpec when using Failure Domains\nOCPBUGS-10661 - machine API operator failing with No Major.Minor.Patch elements found\nOCPBUGS-10678 - Developer catalog shows ImageStreams as samples which has no sampleRepo\nOCPBUGS-10679 - Show type of sample on the samples view\nOCPBUGS-10689 - [IPI on BareMetal]: Workers failing inspection when installing with proxy\nOCPBUGS-10697 - [release-4.13] User is allowed to create IP Address pool with duplicate entries for namespace and matchExpression for serviceSelector and namespaceSelector\nOCPBUGS-10698 - [release-4.13] Already assigned IP address is removed from a service on editing the ip address pool. 
\nOCPBUGS-10710 - Metal virtual media job permafails during early bootstrap\nOCPBUGS-10716 - Image Registry default to Removed on IBM cloud after 4.13.0-ec.3\nOCPBUGS-10739 - [4.13] Bootimage bump tracker\nOCPBUGS-10744 - [4.13] EgressFirewall status disappeared \nOCPBUGS-10746 - Downstream Operator-SDK v1.22.2 to OCP 4.13\nOCPBUGS-10771 - upgrade test failure with \"Cluster operator control-plane-machine-set is not available\"\nOCPBUGS-10773 - TestNewAppRun unit test failing\nOCPBUGS-10792 - Hypershift namespace servicemonitor has wrong API group\nOCPBUGS-10793 - Ignore device list missing in Node Exporter \nOCPBUGS-10796 - [4.13] Egress firewall is not retried on error\nOCPBUGS-10799 - Network policy perf improvements\nOCPBUGS-10801 - [4.13] Upgrade to 4.10 stalled on timeout completing syncEgressFirewall\nOCPBUGS-10811 - Missing vCenter build number in telemetry\nOCPBUGS-10813 - SCOS bootstrap should skip pivot when root is not writable\nOCPBUGS-10826 - RHEL 9.2 doesn\u0027t contain the `kernel-abi-whitelists` package. \nOCPBUGS-10832 - Edit Deployment (and DC) form doesn\u0027t enable Save button when changing strategy type\nOCPBUGS-10833 - update the default pipelineRun template name\nOCPBUGS-10834 - [OVNK] [IC] Having only one leader election in the master process\nOCPBUGS-10873 - OVN to OVN-H migration seems broken\nOCPBUGS-10888 - oauth-server fails to invalidate cache, causing non existing groups being referenced\nOCPBUGS-10890 - Hypershift replace upgrade: node in NotReady after upgrading from a 4.14 image to another 4.14 image\nOCPBUGS-10891 - Cluster Autoscaler balancing similar nodes test fails randomly\nOCPBUGS-10892 - Passwords printed in log messages\nOCPBUGS-10893 - Remove unsupported warning in oc-mirror when using the --skip-pruning flag\nOCPBUGS-10902 - [IBMCloud] destroyed the private cluster, fail to cleanup the dns records\nOCPBUGS-10903 - [IBMCloud] fail to ssh to master/bootstrap/worker nodes from the bastion inside a customer vpc. 
\nOCPBUGS-10907 - move to rhel9 in DTK for 4.13\nOCPBUGS-10914 - Node healthz server: return unhealthy when pod is to be deleted\nOCPBUGS-10919 - Update Samples Operator to use latest jenkins 4.12 release\nOCPBUGS-10923 - Cluster bootstrap waits for only one master to join before finishing \nOCPBUGS-10929 - Kube 1.26 for ovn-k\nOCPBUGS-10946 - For IPv6-primary dual-stack cluster, kubelet.service renders only single node-ip\nOCPBUGS-10951 - When imagesetconfigure without OCI FBC format config, but command with use-oci-feature flag, the oc-mirror command should check the imagesetconfigure firstly and print error immediately\nOCPBUGS-10953 - ovnkube-node does not close up correctly\nOCPBUGS-10955 - [release-4.13] NMstate complains about ping not working when adding multiple routing tables with different gateways\nOCPBUGS-10960 - [4.13] Vertical Scaling: do not trigger inadvertent machine deletion during bootstrap\nOCPBUGS-10965 - The network-tools image stream is missing in the cluster samples\nOCPBUGS-10982 - [4.13] nodeSelector in EgressFirewall doesn\u0027t work in dualstack cluster\nOCPBUGS-10989 - Agent create sub-command is returning fatal error\nOCPBUGS-10990 - EgressIP doesn\u0027t work in GCP XPN cluster\nOCPBUGS-11004 - Bootstrap kubelet client cert should include system:serviceaccounts group\nOCPBUGS-11010 - [vsphere] zone cluster installation fails if vSphere Cluster is embedded in Folder\nOCPBUGS-11022 - [4.13][scale] all egressfirewalls will be updated on every node update\nOCPBUGS-11023 - [4.13][scale] Ingress network policy creates more flows than before\nOCPBUGS-11031 - SNO OCP upgrade from 4.12 to 4.13 failed due to node-tuning operator is not available - tuned pod stuck at Terminating\nOCPBUGS-11032 - Update the validation interval for the cluster transfer to 12 hours\nOCPBUGS-11040 - --container-runtime is being removed in k8s 1.27\nOCPBUGS-11054 - GCP: add europe-west12 region to the survey as supported region\nOCPBUGS-11055 - APIServer service 
isn't selected correctly for PublicAndPrivate cluster when external-dns is not configured
OCPBUGS-11058 - [4.13] Conmon leaks symbolic links in /var/run/crio when pods are deleted
OCPBUGS-11068 - nodeip-configuration not enabled for VSphere UPI
OCPBUGS-11107 - Alerts display incorrect source when adding external alert sources
OCPBUGS-11117 - The provided gcc RPM inside DTK does not match the gcc used to build the kernel
OCPBUGS-11120 - DTK docs should mention the ubi9 base image instead of ubi8
OCPBUGS-11213 - BMH moves to deleting before all finalizers are processed
OCPBUGS-11218 - "pipelines-as-code-pipelinerun-go" configMap is not been used for the Go repository
OCPBUGS-11222 - kube-controller-manager cluster operator is degraded due connection refused while querying rules
OCPBUGS-11227 - Relax CSR check due to k8s 1.27 changes
OCPBUGS-11232 - All projects options shows as undefined after selection in Dev perspective Pipelines page
OCPBUGS-11248 - Secret name variable get renders in Create Image pull secret alert
OCPBUGS-1125 - Fix disaster recovery test [sig-etcd][Feature:DisasterRecovery][Disruptive] [Feature:EtcdRecovery] Cluster should restore itself after quorum loss [Serial]
OCPBUGS-11257 - egressip cannot be assigned on hypershift hosted cluster node
OCPBUGS-11261 - [AWS][4.13] installer get stuck if BYO private hosted zone is configured
OCPBUGS-11263 - PTP KPI version 4.13 RC2 WPC - offset jumps to huge numbers
OCPBUGS-11307 - Egress firewall node selector test missing
OCPBUGS-11333 - startupProbe for UWM prometheus is still 15m
OCPBUGS-11339 - ose-ansible-operator base image version is still 4.12 in the operators that generated by operator-sdk 4.13
OCPBUGS-11340 - ose-helm-operator base image version is still 4.12 in the operators that generated by operator-sdk 4.13
OCPBUGS-11341 - openshift-manila-csi-driver is missing the workload.openshift.io/allowed label
OCPBUGS-11354 - CPMS: node readiness transitions not always trigger reconcile
OCPBUGS-11384 - Switching from enabling realTime to disabling Realtime Workloadhint causes stalld to be enabled
OCPBUGS-11390 - Service Binding Operator installation fails: "A subscription for this operator already exists in namespace ..."
OCPBUGS-11424 - [release-4.13] new whereabouts reconciler relies on HOSTNAME which != spec.nodeName
OCPBUGS-11427 - [release-4.13] whereabouts reads wrong annotation "k8s.v1.cni.cncf.io/networks-status", should be "k8s.v1.cni.cncf.io/network-status"
OCPBUGS-11456 - PTP - When GM and downstream slaves are configured on same server, ptp metrics show slaves as FREERUN
OCPBUGS-11458 - Ingress Takes 40s on Average Downtime During GCP OVN Upgrades
OCPBUGS-11460 - CPMS doesn't always generate configurations for AWS
OCPBUGS-11468 - Community operator cannot be mirrored due to malformed image address
OCPBUGS-11469 - [release4.13] "exclude bundles with `olm.deprecated` property when rendering" not backport
OCPBUGS-11473 - NS autolabeler requires RoleBinding subject namespace to be set when using ServiceAccount
OCPBUGS-11485 - [4.13] NVMe disk by-id rename breaks LSO/ODF
OCPBUGS-11503 - Update 4.13 cluster-network-operator image in Dockerfile to be consistent with ART
OCPBUGS-11506 - CPMS e2e periodics tests timeout failures
OCPBUGS-11507 - Potential 4.12 to 4.13 upgrade failure due to NIC rename
OCPBUGS-11510 - Setting cpu-quota.crio.io to `disable` with crun causes container creation to fail
OCPBUGS-11511 - [4.13] static container pod cannot be running due to CNI request failed with status 400
OCPBUGS-11529 - [Azure] fail to collect the vm serial log with 'gather bootstrap'
OCPBUGS-11536 - Cluster monitoring operator runs node-exporter with btrfs collector
OCPBUGS-11545 - multus-admission-controller should not run as root under Hypershift-managed CNO
OCPBUGS-11558 - multus-admission-controller should not run as root under Hypershift-managed CNO
OCPBUGS-11589 - Ensure systemd is compatible with rhel8 journalctl
OCPBUGS-11598 - openshift-azure-routes triggered continously on rhel9
OCPBUGS-11606 - User configured In-cluster proxy configuration squashed in hypershift
OCPBUGS-11643 - Updating kube-rbac-proxy images to be consistent with ART
OCPBUGS-11657 - [4.13] Static IPv6 LACP bonding is randomly failing in RHCOS 413.92
OCPBUGS-11659 - Error extracting libnmstate.so.1.3.3 when create image
OCPBUGS-11661 - AWS s3 policy changes block all OCP installs on AWS
OCPBUGS-11669 - Bump to kubernetes 1.26.3
OCPBUGS-11683 - [4.13] Add Controller health to CEO liveness probe
OCPBUGS-11694 - [4.13] Update legacy toolbox to use registry.redhat.io/rhel9/support-tools
OCPBUGS-11706 - ccoctl cannot create STS documents in 4.10-4.13 due to s3 policy changes
OCPBUGS-11750 - TuningCNI cnf-test failure: sysctl allowlist update
OCPBUGS-11765 - [4.13] Keep current OpenSSH default config in RHCOS 9
OCPBUGS-11776 - [4.13] VSphereStorageDriver does not document the platform default
OCPBUGS-11778 - Upgrade SNO: no resolv.conf caused by failure in forcedns dispatcher script
OCPBUGS-11787 - Update 4.14 ose-vmware-vsphere-csi-driver image to be consistent with ART
OCPBUGS-11789 - [4.13] Bootimage bump tracker
OCPBUGS-11799 - [4.13] Bootimage bump tracker
OCPBUGS-11823 - [Reliability]kube-apiserver's memory usage keep increasing to max 3GB in 7 days
OCPBUGS-11848 - PtpOperatorsConfig not applying correctly
OCPBUGS-11866 - Pipeline is not removed when Deployment/DC/Knative Service or Application is deleted
OCPBUGS-11870 - [4.13] Nodes in Ironic are created without namespaces initially
OCPBUGS-11876 - oc-mirror generated file-based catalogs crashloop
OCPBUGS-11908 - Got the `file exists` error when different digest direct to the same tag
OCPBUGS-11917 - the warn message won't disappear in co/node-tuning when scale down machineset
OCPBUGS-11919 - Console metrics could have a high cardinality (4.13)
OCPBUGS-11950 - fail to create vSphere IPI cluster as apiVIP and ingressVIP are not in machine networks
OCPBUGS-11955 - NTP config not applied
OCPBUGS-11968 - Instance shouldn't be moved back from f to a
OCPBUGS-11985 - [4.13] Ironic inspector service should be proxied
OCPBUGS-12172 - Users don't know what type of resource is being created by Import from Git or Deploy Image flows
OCPBUGS-12179 - agent-tui is failing to start when using libnmstate.2
OCPBUGS-12186 - Pipeline doesn't render correctly when displayed but looks fine in edit mode
OCPBUGS-12198 - create hosted cluster failed with aws s3 access issue
OCPBUGS-12212 - cluster failed to convert from dualstack to ipv4 single stack
OCPBUGS-12225 - Add new OCP 4.13 storage admission plugin
OCPBUGS-12257 - Catalogs rebuilt by oc-mirror are in crashloop : cache is invalid
OCPBUGS-12259 - oc-mirror fails to complete with heads only complaining about devworkspace-operator
OCPBUGS-12271 - Hypershift conformance test fails new cpu partitioning tests
OCPBUGS-12272 - Importing a kn Service shows a non-working Open URL decorator also when the Add Route checkbox was unselected
OCPBUGS-12273 - When Creating Sample Devfile from the Samples Page, Topology Icon is not set
OCPBUGS-12450 - [4.13] Fix Flake TestAttemptToScaleDown/scale_down_only_by_one_machine_at_a_time
OCPBUGS-12465 - --use-oci-feature leads to confusion and needs to be better named
OCPBUGS-12478 - CSI driver + operator containers are not pinned to mgmt cores
OCPBUGS-1264 - e2e-vsphere-zones failing due to unable to parse cloud-config
OCPBUGS-12698 - redfish-virtualmedia mount not working
OCPBUGS-12703 - redfish-virtualmedia mount not working
OCPBUGS-12708 - [4.13] Changing a PreprovisioningImage ImageURL and/or ExtraKernelParams should reboot the host
OCPBUGS-1272 - "opm alpha render-veneer basic" doesn't support pipe stdin
OCPBUGS-12737 - Multus admission controller must have "hypershift.openshift.io/release-image" annotation when CNO is managed by Hypershift
OCPBUGS-12786 - OLM CatalogSources in guest cluster cannot pull images if pre-GA
OCPBUGS-12804 - Dual stack VIPs incompatible with EnableUnicast setting
OCPBUGS-12854 - `cluster-reader` role cannot access "k8s.ovn.org" API Group resources
OCPBUGS-12862 - IPv6 ingress VIP not configured in keepalived on vSphere Dual-stack
OCPBUGS-12865 - Kubernetes-NMState CI is perma-failing
OCPBUGS-12933 - Node Tuning Operator crashloops when in Hypershift mode
OCPBUGS-12994 - TCP DNS Local Preference is not working for Openshift SDN
OCPBUGS-12999 - Backport owners through 4.13, 4.12
OCPBUGS-13029 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13
OCPBUGS-13057 - ppc64le releases don't install because ovs fails to start (invalid permissions)
OCPBUGS-13069 - [whereabouts-cni] CNO must use reconciliation controller in order to support dual stack in 4.12 [4.13 dependency]
OCPBUGS-13071 - CI fails on TestClientTLS
OCPBUGS-13072 - Capture tests don't work in OVNK
OCPBUGS-13076 - Load balancers/ Ingress controller removal race condition
OCPBUGS-13157 - CI fails on TestRouterCompressionOperation
OCPBUGS-13254 - Nutanix cloud provider should use Kubernetes 1.26 dependencies
OCPBUGS-1327 - [IBMCloud] Worker machines unreachable during initial bring up
OCPBUGS-1352 - OVN silently failing in case of a stuck pod
OCPBUGS-1427 - Ignore non-ready endpoints when processing endpointslices
OCPBUGS-1428 - service account token secret reference
OCPBUGS-1435 - [Ingress Node Firewall Operator] [Web Console] Allow user to override namespace where the operator is installed, currently user can install it only in openshift-operators ns
OCPBUGS-1443 - Unable to get ClusterVersion error while upgrading 4.11 to 4.12
OCPBUGS-1453 - TargetDown alert expression is NOT correctly joining kube-state-metrics metric
OCPBUGS-1458 - cvo pod crashloop during bootstrap: featuregates: connection refused
OCPBUGS-1486 - Avoid re-metric'ing the pods that are already setup when ovnkube-master disrupts/reinitializes/restarts/goes through leader election
OCPBUGS-1557 - Default to floating automaticRestart for new GCP instances
OCPBUGS-1560 - [vsphere] installation fails when only configure single zone in install-config
OCPBUGS-1565 - Possible split brain with keepalived unicast
OCPBUGS-1566 - Automation Offline CPUs Test cases
OCPBUGS-1577 - Incorrect network configuration in worker node with two interfaces
OCPBUGS-1604 - Common resources out-of-date when using multicluster switcher
OCPBUGS-1606 - Multi-cluster: We should not filter OLM catalog by console pod architecture and OS on managed clusters
OCPBUGS-1612 - [vsphere] installation errors out when missing topology in a failure domain
OCPBUGS-1617 - Remove unused node.kubernetes.io/not-reachable toleration
OCPBUGS-1627 - [vsphere] installation fails when setting user-defined folder in failure domain
OCPBUGS-1646 - [osp][octavia lb] LBs type svcs not updated until all the LBs are created
OCPBUGS-166 - 4.11 SNOs fail to complete install because of "failed to get pod annotation: timed out waiting for annotations: context deadline exceeded"
OCPBUGS-1665 - Scorecard failed because of the request of PodSecurity
OCPBUGS-1671 - Creating a statefulset with the example image from the UI on ARM64 leads to a Pod in crashloopbackoff due to the only-amd64 image provided
OCPBUGS-1704 - [gcp] when the optional Service Usage API is disabled, IPI installation cannot succeed
OCPBUGS-1725 - Affinity rule created in router deployment for single-replica infrastructure and "NodePortService" endpoint publishing strategy
OCPBUGS-1741 - Can't load additional Alertmanager templates with latest 4.12 OpenShift
OCPBUGS-1748 - PipelineRun templates must be fetched from OpenShift namespace
OCPBUGS-1761 - osImages that cannot be pulled do not set the node as Degraded properly
OCPBUGS-1769 - gracefully fail when iam:GetRole is denied
OCPBUGS-1778 - Can't install clusters with schedulable masters
OCPBUGS-1791 - Wait-for install-complete did not exit upon completion.
OCPBUGS-1805 - [vsphere-csi-driver-operator] CSI cloud.conf doesn't list multiple datacenters when specified
OCPBUGS-1807 - Ingress Operator startup bad log message formatting
OCPBUGS-1844 - Ironic dnsmasq doesn't include existing DNS settings during iPXE boot
OCPBUGS-1852 - [RHOCP 4.10] Subscription tab for operator doesn't land on correct URL
OCPBUGS-186 - PipelineRun task status overlaps status text
OCPBUGS-1998 - Cluster monitoring fails to achieve new level during upgrade w/ unavailable node
OCPBUGS-2015 - TestCertRotationTimeUpgradeable failing consistently in kube-apiserver-operator
OCPBUGS-2083 - OCP 4.10.33 uses a weak 3DES cipher in the VMWare CSI Operator for communication and provides no method to disable it
OCPBUGS-2088 - User can set rendezvous host to be a worker
OCPBUGS-2141 - doc link in PrometheusDataPersistenceNotConfigured message is 4.8
OCPBUGS-2145 - 'maxUnavailable' and 'minAvailable' on PDB creation page - i18n misses
OCPBUGS-2209 - Hard eviction thresholds is different with k8s default when PAO is enabled
OCPBUGS-2248 - [alibabacloud] IPI installation failed with master nodes being NotReady and CCM error "alicloud: unable to split instanceid and region from providerID"
OCPBUGS-2260 - KubePodNotReady - Increase Tolerance During Master Node Restarts
OCPBUGS-2306 - On Make Serverless page, to change values of the inputs minpod, maxpod and concurrency fields, we need to click the ' + ' or ' - ', it can't be changed by typing in it.
OCPBUGS-2319 - metal-ipi upgrade success rate dropped 30+% in last week
OCPBUGS-2384 - [2035720] [IPI on Alibabacloud] deploying a private cluster by 'publish: Internal' failed due to 'dns_public_record'
OCPBUGS-2440 - unknown field logs in prometheus-operator
OCPBUGS-2471 - BareMetalHost is available without cleaning if the cleaning attempt fails
OCPBUGS-2479 - Right border radius is 0 for the pipeline visualization wrapper in dark mode
OCPBUGS-2500 - Developer Topology always blanks with large contents when first rendering
OCPBUGS-2513 - Disconnected cluster installation fails with pull secret must contain auth for "registry.ci.openshift.org"
OCPBUGS-2525 - [CI Watcher] Ongoing timeout failures associated with multiple CRD-extensions tests
OCPBUGS-2532 - Upgrades from 4.11.9 to latest 4.12.x Nightly builds do not succeed
OCPBUGS-2551 - "Error loading" when normal user check operands on All namespaces
OCPBUGS-2569 - ovn-k network policy races
OCPBUGS-2579 - Helm Charts and Samples are not disabled in topology actions if actions are disabled in customization
OCPBUGS-266 - Project Access tab cannot differentiate between users and groups
OCPBUGS-2666 - `create a project` link not backed by RBAC check
OCPBUGS-272 - Getting duplicate word "find" when kube-apiserver degraded=true if webhook matches a virtual resource
OCPBUGS-2727 - ClusterVersionRecommendedUpdate condition blocks explicitly allowed upgrade which is not in the available updates
OCPBUGS-2729 - should ignore enP.* NICs from node-exporter on Azure cluster
OCPBUGS-2735 - Operand List Page Layout Incorrect on small screen size.
OCPBUGS-2738 - CVE-2022-26945 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323 ose-baremetal-installer-container: various flaws [openshift-4.13.z]
OCPBUGS-2824 - The dropdown list component will be covered by deployment details page on Topology page
OCPBUGS-2827 - OVNK: NAT issue for packets exceeding check_pkt_larger() for NodePort services that route to hostNetworked pods
OCPBUGS-2841 - Need validation rule for supported arch
OCPBUGS-2845 - Unable to use application credentials for Cinder CSI after OpenStack credentials update
OCPBUGS-2847 - GCP XPN should only be available with Tech Preview
OCPBUGS-2851 - [OCI feature] registries.conf support in oc mirror
OCPBUGS-2852 - etcd failure: failed to make etcd client for endpoints [https://[2620:52:0:1eb:367x:5axx:xxx:xxx]:2379]: context deadline exceeded
OCPBUGS-2868 - Container networking pods cannot be access hosted network pods on another node in ipv6 single stack cluster
OCPBUGS-2873 - Prometheus doesn't reload TLS certificate and key files on disk
OCPBUGS-2886 - The LoadBalaner section shouldn't be set when using Kuryr on cloud-provider
OCPBUGS-2891 - AWS Deprovision Fails with unrecognized elastic load balancing resource type listener
OCPBUGS-2895 - [RFE] 4.11 Azure DiskEncryptionSet static validation does not support upper-case letters
OCPBUGS-2904 - If all the actions are disabled in add page, Details on/off toggle switch to be disabled
OCPBUGS-2907 - provisioning of baremetal nodes fails when using multipath device as rootDeviceHints
OCPBUGS-2921 - br-ex interface not configured makes ovnkube-node Pod to crashloop
OCPBUGS-2922 - 'Status' column sorting doesn't work as expected
OCPBUGS-2926 - Unable to gather OpenStack console logs since kernel cmd line has no console args
OCPBUGS-2934 - Ingress node firewall pod's events container on the node causing pod in CrashLoopBackOff state when sctp module is loaded on node
OCPBUGS-2941 - CIRO unable to detect swift when content-type is omitted in 204-responses
OCPBUGS-2946 - [AWS] curl network Loadbalancer always get "Connection time out"
OCPBUGS-2948 - Whereabouts CNI timesout while iterating exclude range
OCPBUGS-2988 - apiserver pods cannot reach etcd on single node IPv6 cluster: transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10"
OCPBUGS-2991 - CI jobs are failing with: admission webhook "validation.csi.vsphere.vmware.com" denied the request
OCPBUGS-2992 - metal3 pod crashloops on OKD in BareMetal IPI or assisted-installer bare metal installations
OCPBUGS-2994 - Keepalived monitor stuck for long period of time on kube-api call while installing
OCPBUGS-2996 - [4.13] Bootimage bump tracker
OCPBUGS-3018 - panic in WaitForBootstrapComplete
OCPBUGS-3021 - GCP: missing me-west1 region
OCPBUGS-3024 - Service list shows undefined:80 when type is ExternalName or LoadBalancer
OCPBUGS-3027 - Metrics are not available when running console in development mode
OCPBUGS-3029 - BareMetalHost CR fails to delete on cluster cleanup
OCPBUGS-3033 - Clicking the logo in the masthead goes to `/dashboards`, even if metrics are disabled
OCPBUGS-3041 - Guard Pod Hostnames Too Long and Truncated Down Into Collisions With Other Masters
OCPBUGS-3069 - Should show information on page if the upgrade to a target version doesn't take effect.
OCPBUGS-3072 - Operator-sdk run bundle with old sqllite index image failed
OCPBUGS-3079 - RPS hook only sets the first queue, but there are now many
OCPBUGS-3085 - [IPI-BareMetal]: Dual stack deployment failed on BootStrap stage
OCPBUGS-3093 - The control plane should tag AWS security groups at creation
OCPBUGS-3096 - The terraform binaries shipped by the installer are not statically linked
OCPBUGS-3109 - Change text colour for ConsoleNotification that notifies user that the cluster is being
OCPBUGS-3114 - CNO reporting incorrect status
OCPBUGS-3123 - Operator attempts to render both GA and Tech Preview API Extensions
OCPBUGS-3127 - nodeip-configuration retries forever on network failure, blocking ovs-configuration, spamming syslog
OCPBUGS-3168 - Add Capacity button does not exist after upgrade OCP version [OCP4.11->OCP4.12]
OCPBUGS-3172 - Console shouldn't try to install dynamic plugins if permissions aren't available
OCPBUGS-3180 - Regression in ptp-operator conformance tests
OCPBUGS-3186 - [ibmcloud] unclear error msg when zones is not match with the Subnets in BYON install
OCPBUGS-3192 - [4.8][OVN] RHEL 7.9 DHCP worker ovs-configuration fails
OCPBUGS-3195 - Service-ca controller exits immediately with an error on sigterm
OCPBUGS-3206 - [sdn2ovn] Migration failed in vsphere cluster
OCPBUGS-3207 - SCOS build fails due to pinned kernel
OCPBUGS-3214 - Installer does not always add router CA to kubeconfig
OCPBUGS-3228 - Broken secret created while starting a Pipeline
OCPBUGS-3235 - Topology gets stuck loading
OCPBUGS-3245 - ovn-kubernetes ovnkube-master containers crashlooping after 4.11.0-0.okd-2022-10-15-073651 update
OCPBUGS-3248 - CVE-2022-27191 ose-installer-container: golang: crash in a golang.org/x/crypto/ssh server [openshift-4]
OCPBUGS-3253 - No warning when using wait-for vs. agent wait-for commands
OCPBUGS-3272 - Unhealthy Readiness probe failed message failing CI when ovnkube DBs are still coming up
OCPBUGS-3275 - No-op: Unable to retrieve machine from node "xxx": expecting one machine for node xxx got: []
OCPBUGS-3277 - Install failure in create-cluster-and-infraenv.service
OCPBUGS-3278 - Shouldn't need to put host data in platform baremetal section in installconfig
OCPBUGS-3280 - Install ends in preparing-failed due to container-images-available validation
OCPBUGS-3283 - remove unnecessary RBAC in KCM
OCPBUGS-3292 - DaemonSet "/openshift-network-diagnostics/network-check-target" is not available
OCPBUGS-3314 - 'gitlab.secretReference' disappears when the buildconfig is edited on 'From View'
OCPBUGS-3316 - Branch name should sanitised to match actual github branch name in repository plr list
OCPBUGS-3320 - New master will be created if add duplicated failuredomains in controlplanemachineset
OCPBUGS-3331 - Update dependencies in CMO release 4.13
OCPBUGS-3334 - Console should be using v1 apiVersion for ConsolePlugin model
OCPBUGS-3337 - revert "force cert rotation every couple days for development" in 4.12
OCPBUGS-3338 - Environment cannot find Python
OCPBUGS-3358 - Revert BUILD-407
OCPBUGS-3372 - error message is too generic when creating a silence with end time before start
OCPBUGS-3373 - cluster-monitoring-view user can not list servicemonitors on "Observe -> Targets" page
OCPBUGS-3377 - CephCluster and StorageCluster resources use the same paths
OCPBUGS-3381 - Make ovnkube-trace work on hypershift deployments
OCPBUGS-3382 - Unable to configure cluster-wide proxy
OCPBUGS-3391 - seccomp profile unshare.json missing from nodes
OCPBUGS-3395 - Event Source is visible without even creating knative-eventing and knative-serving.
OCPBUGS-3404 - IngressController.spec.nodePlacement.nodeSelector.matchExpressions does not work
OCPBUGS-3414 - Missing 'ImageContentSourcePolicy' and 'CatalogSource' in the oci fbc feature implementation
OCPBUGS-3424 - Azure Disk CSI Driver Operator gets degraded without "CSISnapshot" capability
OCPBUGS-3426 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13
OCPBUGS-3427 - Skip broken [sig-devex][Feature:ImageEcosystem] tests
OCPBUGS-3438 - cloud-network-config-controller not using proxy settings of the management cluster
OCPBUGS-3440 - Authentication operator doesn't respond to console being enabled
OCPBUGS-3441 - Update cluster-authentication-operator not to go degraded without console
OCPBUGS-3444 - [4.13] Descheduler pod is OOM killed when using descheduler-operator profiles on big clusters
OCPBUGS-3456 - track `rhcos-4.12` branch for fedora-coreos-config submodule
OCPBUGS-3458 - Surface ClusterVersion RetrievedUpdates condition messages
OCPBUGS-3465 - IBM operator needs deployment manifest fixes
OCPBUGS-3473 - Allow listing crio and kernel versions in machine-os components
OCPBUGS-3476 - Show Tag label and tag name if tag is detected in repository PipelineRun list and details page
OCPBUGS-3480 - Baremetal Provisioning fails on HP Gen9 systems due to eTag handling
OCPBUGS-3499 - Route CRD validation behavior must be the same as openshift-apiserver behavior
OCPBUGS-3501 - Route CRD host-assignment behavior must be the same as openshift-apiserver behavior
OCPBUGS-3502 - CRD-based and openshift-apiserver-based Route validation/defaulting must use the shared implementation
OCPBUGS-3508 - masters repeatedly losing connection to API and going NotReady
OCPBUGS-3524 - The storage account for the CoreOS image is publicly accessible when deploying fully private cluster on Azure
OCPBUGS-3526 - oc fails to extract layers that set xattr on Darwin
OCPBUGS-3539 - [OVN-provider]loadBalancer svc with monitors not working
OCPBUGS-3612 - [IPI] Baremetal ovs-configure.sh script fails to start secondary bridge br-ex1
OCPBUGS-3621 - EUS upgrade stuck on worker pool update: error running skopeo inspect --no-tags
OCPBUGS-3648 - Container security operator Image Manifest Vulnerabilities encounters runtime errors under some circumstances
OCPBUGS-3659 - Expose AzureDisk metrics port over HTTPS
OCPBUGS-3662 - don't enforce PSa in 4.12
OCPBUGS-3667 - PTP 4.12 Regression - CLOCK REALTIME status is locked when physical interface is down
OCPBUGS-3668 - 4.12.0-rc.0 fails to deploy on VMware IPI
OCPBUGS-3676 - After node's reboot some pods fail to start - deleteLogicalPort failed for pod cannot delete GR SNAT for pod
OCPBUGS-3693 - Router e2e: drop template.openshift.io apigroup dependency
OCPBUGS-3709 - Special characters in subject name breaks prefilling role binding form
OCPBUGS-3713 - [vsphere-problem-detector] fully qualified username must be used when checking permissions
OCPBUGS-3714 - 'oc adm upgrade ...' should expose ClusterVersion Failing=True
OCPBUGS-3739 - Pod stuck in containerCreating state when the node on which it is running is Terminated
OCPBUGS-3744 - Egress router POD creation is failing while using openshift-sdn network plugin
OCPBUGS-3755 - Create Alertmanager silence form does not explain the new "Negative matcher" option
OCPBUGS-3761 - Consistent e2e test failure: Events.Events: event view displays created pod
OCPBUGS-3765 - [RFE] Add kernel-rpm-macros to DTK image
OCPBUGS-3771 - contrib/multicluster-environment.sh needs to be updated to work with ACM cluster proxy
OCPBUGS-3776 - Manage columns tooltip remains displayed after dialog is closed
OCPBUGS-3777 - [Dual Stack] ovn-ipsec crashlooping due to cert signing issues
OCPBUGS-3797 - [4.13] Bump OVS control plane to get "ovsdb/transaction.c: Refactor assess_weak_refs."
OCPBUGS-3822 - Cluster-admin cannot know whether operator is fully deleted or not after normal user trigger "Delete CSV"
OCPBUGS-3827 - CCM not able to remove a LB in ERROR state
OCPBUGS-3877 - RouteTargetReference missing default for "weight" in Route CRD v1 schema
OCPBUGS-3880 - [Ingress Node Firewall] Change the logo used for ingress node firewall operator
OCPBUGS-3883 - Hosted ovnkubernetes pods are not being spread among workers evenly
OCPBUGS-3896 - Console nav toggle button reports expanded in both expanded and not expanded states
OCPBUGS-3904 - Delete/Add a failureDomain in CPMS to trigger update cannot work right on GCP
OCPBUGS-3909 - Node is degraded when a machine config deploys a unit with content and mask=true
OCPBUGS-3916 - expr for SDNPodNotReady is wrong due to there is not node label for kube_pod_status_ready
OCPBUGS-3919 - Azure: unable to configure EgressIP if an ASG is set
OCPBUGS-3921 - Openshift-install bootstrap operation cannot find a cloud defined in clouds.yaml in the current directory
OCPBUGS-3923 - [CI] cluster-monitoring-operator produces more watch requests than expected
OCPBUGS-3924 - Remove autoscaling/v2beta2 in 4.12 and later
OCPBUGS-3929 - Use flowcontrol/v1beta2 for apf manifests in 4.13
OCPBUGS-3931 - When all extensions are installed, "libkadm5" rpm package is duplicated in the `rpm -q` command
OCPBUGS-3933 - Fails to deprovision cluster when swift omits 'content-type'
OCPBUGS-3945 - Handle 0600 kubeconfig
OCPBUGS-3951 - Dynamic plugin extensions disappear from the UI when a codeRef fails to load
OCPBUGS-3960 - Use kernel-rt from ose repo
OCPBUGS-3965 - must-gather namespace should have 'privileged' warn and audit pod security labels besides enforce
OCPBUGS-3973 - [SNO] csi-snapshot-controller CO is degraded when upgrade from 4.12 to 4.13 and reports permissions issue.
OCPBUGS-3974 - CIRO panics when suspended flag is nil
OCPBUGS-3975 - "Failed to open directory, disabling udev device properties" in node-exporter logs
OCPBUGS-3978 - AWS EBS CSI driver operator is degraded without "CSISnapshot" capability
OCPBUGS-3985 - Allow PSa enforcement in 4.13 by using featuresets
OCPBUGS-3987 - Some nmstate validations are skipped when NM config is in agent-config.yaml
OCPBUGS-3990 - HyperShift control plane operators have wrong priorityClass
OCPBUGS-3993 - egressIP annotation including two interfaces when multiple networks
OCPBUGS-4000 - fix operator naming convention
OCPBUGS-4008 - Console deployment does not roll out when managed cluster configmap is updated
OCPBUGS-4012 - Disabled Serverless add actions should not be displayed in topology menu
OCPBUGS-4026 - Endless rerender loop and a stuck browser on the add and topology page when SBO is installed
OCPBUGS-4047 - [CI-Watcher] e2e test flake: Create key/value secrets Validate a key/value secret
OCPBUGS-4049 - MCO reconcile fails if user replace the pull secret to empty one
OCPBUGS-4052 - [ALBO] OpenShift Load Balancer Operator does not properly support cluster wide proxy
OCPBUGS-4054 - cluster-ingress-operator's configurable-route controller's startup is noisy
OCPBUGS-4089 - Kube-State-metrics pod fails to start due to panic
OCPBUGS-4090 - OCP on OSP - Image registry is deployed with cinder instead of swift storage backend
OCPBUGS-4101 - Empty/missing node-sizing SYSTEM_RESERVED_ES parameter can result in kubelet not starting
OCPBUGS-4110 - Form footer buttons are misaligned in web terminal form
OCPBUGS-4119 - Random SYN drops in OVS bridges of OVN-Kubernetes
OCPBUGS-4166 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13
OCPBUGS-4168 - Prometheus continuously restarts due to slow WAL replay
OCPBUGS-4173 - vsphere-problem-detector should re-check passwords after change
OCPBUGS-4181 - Prometheus and Alertmanager incorrect ExternalURL configured
OCPBUGS-4184 - Use mTLS authentication for all monitoring components instead of bearer token
OCPBUGS-4203 - Unnecessary padding around alert atop debug pod terminal
OCPBUGS-4206 - getContainerStateValue contains incorrectly internationalized text
OCPBUGS-4207 - Remove debug level logging on openshift-config-operator
OCPBUGS-4219 - Add runbook link to PrometheusRuleFailures
OCPBUGS-4225 - [4.13] boot sequence override request fails with Base.1.8.PropertyNotWritable on Lenovo SE450
OCPBUGS-4232 - CNCC: Wrong log format for Azure locking
OCPBUGS-4245 - L2 does not work if a metallb is not able to listen to arp requests on a single interface
OCPBUGS-4252 - Node Terminal tab results in error
OCPBUGS-4253 - Add PodNetworkConnectivityCheck for must-gather
OCPBUGS-4266 - crio.service should use a more safe restart policy to provide recoverability against concurrency issues
OCPBUGS-4279 - Custom Victory-Core components in monitoring ui code causing build issues
OCPBUGS-4280 - Return 0 when `oc import-image` failed
OCPBUGS-4282 - [IR-269]Can't pull sub-manifest image using imagestream of manifest list
OCPBUGS-4291 - [OVN]Sometimes after reboot egress node, egress IP cannot be applied anymore.
OCPBUGS-4293 - Specify resources.requests for operator pod
OCPBUGS-4298 - Specify resources.requests for operator pod
OCPBUGS-4302 - Specify resources.requests for operator pod
OCPBUGS-4305 - [4.13] Improve ironic logging configuration in metal3
OCPBUGS-4317 - [IBM][4.13][Snapshot] restore size in snapshot is not the same size of pvc request size
OCPBUGS-4328 - Update installer images to be consistent with ART
OCPBUGS-434 - After FIPS enabled in S390X, ingress controller in degraded state
OCPBUGS-4343 - Use flowcontrol/v1beta3 for apf manifests in 4.13
OCPBUGS-4347 - set TLS cipher suites in Kube RBAC sidecars
OCPBUGS-4350 - CNO in HyperShift reports upgrade complete in clusteroperator prematurely
OCPBUGS-4352 - [RHOCP] HPA shows different API versions in web console
OCPBUGS-4357 - Bump samples operator k8s dep to 1.25.2
OCPBUGS-4359 - cluster-dns-operator corrupts /etc/hosts when fs full
OCPBUGS-4367 - Debug log messages missing from output and Info messages malformed
OCPBUGS-4377 - Service name search ability while creating the Route from console
OCPBUGS-4401 - limit cluster-policy-controller RBAC permissions
OCPBUGS-4411 - ovnkube node pod crashed after converting to a dual-stack cluster network
OCPBUGS-4417 - ip-reconciler removes the overlappingrangeipreservations whether the pod is alive or not
OCPBUGS-4425 - Egress FW ACL rules are invalid in dualstack mode
OCPBUGS-4447 - [MetalLB Operator] The CSV needs an update to reflect the correct version of operator
OCPBUGS-446 - Cannot Add a project from DevConsole in airgap mode using git importing
OCPBUGS-4483 - apply retry logic to ovnk-node controllers
OCPBUGS-4490 - hypershift: csi-snapshot-controller uses wrong kubeconfig
OCPBUGS-4491 - hypershift: aws-ebs-csi-driver-operator uses wrong kubeconfig
OCPBUGS-4492 - [4.13] The property TransferProtocolType is required for VirtualMedia.InsertMedia
OCPBUGS-4502 - [4.13] [OVNK] Add support for service session affinity timeout
OCPBUGS-4516 - `oc-mirror` does not work as expected relative path for OCI format copy
OCPBUGS-4517 - Better to detail the --command-os of mac for `oc adm release extract` command
OCPBUGS-4521 - all kubelet targets are down after a few hours
OCPBUGS-4524 - Hold lock when deleting completed pod during update event
OCPBUGS-4525 - Don't log in iterateRetryResources when there are no retry entries
OCPBUGS-4535 - There is no 4.13 gcp-filestore-csi-driver-operator version for test
OCPBUGS-4536 - Image registry panics while deploying OCP in eu-south-2 AWS region
OCPBUGS-4537 - Image registry panics while deploying OCP in eu-central-2 AWS region
OCPBUGS-4538 - Image registry panics while deploying OCP in ap-south-2 AWS region
OCPBUGS-4541 - Azure: remove deprecated ADAL
OCPBUGS-4546 - CVE-2021-38561 ose-installer-container: golang: out-of-bounds read in golang.org/x/text/language leads to DoS [openshift-4]
OCPBUGS-4549 - Azure: replace deprecated AD Graph API
OCPBUGS-4550 - [CI] console-operator produces more watch requests than expected
OCPBUGS-4571 - The operator recommended namespace is incorrect after change installation mode to "A specific namespace on the cluster"
OCPBUGS-4574 - Machine stuck in no phase when creating in a nonexistent zone and stuck in Deleting when deleting on GCP
OCPBUGS-463 - OVN-Kubernetes should not send IPs with leading zeros to OVN
OCPBUGS-4630 - Bump documentationBaseURL to 4.13
OCPBUGS-4635 - [OCP 4.13] ironic container images have old packages
OCPBUGS-4638 - Support RHOBS monitoring for HyperShift in CNO
OCPBUGS-4652 - Fixes for RHCOS 9 based on RHEL 9.0
OCPBUGS-4654 - Azure: UPI: Fix storage arm template to work with Galleries and MAO
OCPBUGS-4659 - Network Policy executes duplicate transactions for every pod update
OCPBUGS-4684 - In DeploymentConfig both the Form view and Yaml view are not in sync
OCPBUGS-4689 - SNO not able to bring up Provisioning resource in 4.11.17
OCPBUGS-4691 - Topology sidebar actions doesn't show the latest resource data
OCPBUGS-4692 - PTP operator: Use priority class node critical
OCPBUGS-4700 - read-only update UX: confusing "Update blocked" pop-up
OCPBUGS-4701 - read-only update UX: confusing "Control plane is hosted" banner
OCPBUGS-4703 - Router can migrate to use LivenessProbe.TerminationGracePeriodSeconds
OCPBUGS-4712 - ironic-proxy daemonset not deleted when provisioningNetwork is changed from Disabled to Managed/Unmanaged
OCPBUGS-4724 - [4.13] egressIP annotations not present on OpenShift on Openstack multiAZ installation
OCPBUGS-4725 - mapi_machinehealthcheck_short_circuit not properly reconciling causing MachineHealthCheckUnterminatedShortCircuit alert to fire
OCPBUGS-4746 - Removal of detection of host kubelet kubeconfig breaks IBM Cloud ROKS
OCPBUGS-4756 - OLM generates invalid component selector labels
OCPBUGS-4757 - Revert Catalog PSA decisions for 4.13 (OLM)
OCPBUGS-4758 - Revert Catalog PSA decisions for 4.13 (Marketplace)
OCPBUGS-4769 - Old AWS boot images vs. 4.12: unknown provider 'ec2'
OCPBUGS-4780 - Update openshift/builder release-4.13 to go1.19
OCPBUGS-4781 - Get Helm Release seems to be using List Releases api
OCPBUGS-4793 - CMO may generate Kubernetes events with a wrong object reference
OCPBUGS-4802 - Update formatting with gofmt for go1.19
OCPBUGS-4825 - Pods completed + deleted may leak
OCPBUGS-4827 - Ingress Controller is missing a required AWS resource permission for SC2S region us-isob-east-1
OCPBUGS-4873 - openshift-marketplace namespace missing "audit-version" and "warn-version" PSA label
OCPBUGS-4874 - Baremetal host data is still sometimes required
OCPBUGS-4883 - Default Git type to other info alert should get remove after changing the git type
OCPBUGS-4894 - Disabled Serverless add actions should not be displayed for Knative Service
OCPBUGS-4899 - coreos-installer output not available in the logs
OCPBUGS-4900 - Volume limits test broken on AWS and GCP TechPreview clusters
OCPBUGS-4906 - Cross-namespace template processing is not being tested
OCPBUGS-4909 - Can't reach own service when egress netpol are enabled
OCPBUGS-4913 - Need to wait longer for VM to obtain IP from DHCP
OCPBUGS-4941 - Fails to deprovision cluster when swift omits 'content-type' and there are empty containers
OCPBUGS-4950 - OLM K8s Dependencies should be at 1.25
OCPBUGS-4954 - [IBMCloud] COS Reclamation prevents ResourceGroup cleanup
OCPBUGS-4955 - Bundle Unpacker Using "Always" ImagePullPolicy for digests
OCPBUGS-4969 - ROSA Machinepool EgressIP Labels Not Discovered
OCPBUGS-4975 - Missing translation in ceph storage plugin
OCPBUGS-4986 - precondition: Do not claim warnings would have blocked
OCPBUGS-4997 - Agent ISO does not respect proxy settings
OCPBUGS-5001 - MachineConfigControllerPausedPoolKubeletCA should have a working runbook URI
OCPBUGS-501 - oc get dc fails when AllRequestBodies audit-profile is set in apiserver
OCPBUGS-5010 - Should always delete the must-gather pod when run the must-gather
OCPBUGS-5016 - Editing Pipeline in the ocp console to get information error
OCPBUGS-5018 - Upgrade from 4.11 to 4.12 with Windows machine workers (Spot Instances) failing due to: hcnCreateEndpoint failed in Win32: The object already exists.
OCPBUGS-5036 - Cloud Controller Managers do not react to changes in configuration leading to assorted errors
OCPBUGS-5045 - unit test data race with egress ip tests
OCPBUGS-5068 - [4.13] virtual media provisioning fails when iLO Ironic driver is used
OCPBUGS-5073 - Connection reset by peer issue with SSL OAuth Proxy when route objects are created more than 80.
OCPBUGS-5079 - [CI Watcher] pull-ci-openshift-console-master-e2e-gcp-console jobs: Process did not finish before 4h0m0s timeout
OCPBUGS-5085 - Should only show the selected catalog when after apply the ICSP and catalogsource
OCPBUGS-5101 - [GCP] [capi] Deletion of cluster is happening , it shouldn't be allowed
OCPBUGS-5116 - machine.openshift.io API is not supported in Machine API webhooks
OCPBUGS-512 - Permission denied when write data to mounted gcp filestore volume instance
OCPBUGS-5124 - kubernetes-nmstate does not pass CVP tests in 4.12
OCPBUGS-5136 - provisioning on ilo4-virtualmedia BMC driver fails with error: "Creating vfat image failed: Unexpected error while running command"
OCPBUGS-5140 - [alibabacloud] IPI install got bootstrap failure and without any node ready, due to enforced EIP bandwidth 5 Mbit/s
OCPBUGS-5151 - Installer - provisioning interface on master node not getting ipv4 dhcp ip address from bootstrap dhcp server on OCP IPI BareMetal install
OCPBUGS-5164 - Add support for API version v1beta1 for knativeServing and knativeEventing
OCPBUGS-5165 - Dev Sandbox clusters uses clusterType OSD and there is no way to enforce DEVSANDBOX
OCPBUGS-5182 - [azure] Fail to create master node with vm size in family ECIADSv5 and ECIASv5
OCPBUGS-5184 - [azure] Fail to create master node with vm size in
standardNVSv4Family\nOCPBUGS-5188 - Wrong message in MCCDrainError alert\nOCPBUGS-5234 - [azure] Azure Stack Hub (wwt) UPI installation failed to scale up worker nodes using machinesets \nOCPBUGS-5235 - mapi_instance_create_failed metric cannot work when set acceleratedNetworking: true on Azure\nOCPBUGS-5269 - remove unnecessary RBAC in KCM: file removal\nOCPBUGS-5275 - remove unnecessary RBAC in OCM\nOCPBUGS-5287 - Bug with Red Hat Integration - 3scale - Managed Application Services causes operator-install-single-namespace.spec.ts to fail\nOCPBUGS-5292 - Multus: Interface name contains an invalid character / [ocp 4.13]\nOCPBUGS-5300 - WriteRequestBodies audit profile records routes/status events at RequestResponse level\nOCPBUGS-5306 - One old machine stuck in Deleting and many co get degraded when doing master replacement on the cluster with OVN network\nOCPBUGS-5346 - Reported vSphere Connection status is misleading\nOCPBUGS-5347 - Clusteroperator Available condition is updated every 2 mins when operator is disabled\nOCPBUGS-5353 - Dashboard graph should not be stacked - Kubernetes / Compute Resources / Pod Dashboard\nOCPBUGS-5410 - [AWS-EBS-CSI-Driver] provision volume using customer kms key couldn\u0027t restore its snapshot successfully\nOCPBUGS-5423 - openshift-marketplace pods cause PodSecurityViolation alert to fire\nOCPBUGS-5428 - Many plugin SDK extension docs are missing descriptions\nOCPBUGS-5432 - Downstream Operator-SDK v1.25.1 to OCP 4.13\nOCPBUGS-5458 - wal: max entry size limit exceeded\nOCPBUGS-5465 - Context Deadline exceeded when PTP service is disabled from the switch\nOCPBUGS-5466 - Default CatalogSource aren\u0027t always reverted to default settings\nOCPBUGS-5492 - CI \"[Feature:bond] should create a pod with bond interface\" fail for MTU migration jobs\nOCPBUGS-5497 - MCDRebootError alarm disappears after 15 minutes\nOCPBUGS-5498 - Host inventory quick start for OCP\nOCPBUGS-5505 - Upgradeability check is throttled too much and with 
unnecessary non-determinism\nOCPBUGS-5508 - Report topology usage in vSphere environment via telemetry\nOCPBUGS-5517 - [Azure/ARO] Update Azure SDK to v63.1.0+incompatible \nOCPBUGS-5520 - MCDPivotError alert fires due temporary transient failures \nOCPBUGS-5523 - Catalog, fatal error: concurrent map read and map write\nOCPBUGS-5524 - Disable vsphere intree tests that exercise multiple tests\nOCPBUGS-5534 - [UI] When OCP and ODF are upgraded, refresh web console pop-up doesn\u0027t appear after ODF upgrade resulting in dashboard crash\nOCPBUGS-5540 - Typo in WTO for Milliseconds\nOCPBUGS-5542 - Project dropdown order is not as smart as project list page order\nOCPBUGS-5546 - Machine API Provider Azure should not modify the Machine spec\nOCPBUGS-5547 - Webhook Secret (1 of 2) is not removed when Knative Service is deleted\nOCPBUGS-5559 - add default noProxy config for Azure\nOCPBUGS-5733 - [Openshift Pipelines] Description of parameters are not shown in pipelinerun description page\nOCPBUGS-5734 - Azure: VIP 168.63.129.16 should be noProxy to all clouds except Public\nOCPBUGS-5736 - The main section of the page will keep loading after normal user login\nOCPBUGS-5759 - Deletion of BYOH Windows node hangs in Ready,SchedulingDisabled\nOCPBUGS-5802 - update sprig to v3 in cno\nOCPBUGS-5836 - Incorrect redirection when user try to download windows oc binary\nOCPBUGS-5842 - executes /host/usr/bin/oc\nOCPBUGS-5851 - [CI-Watcher]: Using OLM descriptor components deletes operand \nOCPBUGS-5873 - etcd_object_counts is deprecated and replaced with apiserver_storage_objects, causing \"etcd Object Count\" dashboard to only show OpenShift resources\nOCPBUGS-5888 - Failed to install 4.13 ocp on SNO with \"error during syncRequiredMachineConfigPools\"\nOCPBUGS-5891 - oc-mirror heads-only does not work with target name\nOCPBUGS-5903 - gather default ingress controller definition\nOCPBUGS-5922 - [2047299 Jira placeholder] nodeport not reachable port connection timeout\nOCPBUGS-5939 - 
revert \"force cert rotation every couple days for development\" in 4.13\nOCPBUGS-5948 - Runtime error using API Explorer with AdmissionReview resource\nOCPBUGS-5949 - oc --icsp mapping scope does not match openshift icsp mapping scope\nOCPBUGS-5959 - [4.13] Bootimage bump tracker\nOCPBUGS-5988 - Degraded etcd on assisted-installer installation- bootstrap etcd is not removed properly\nOCPBUGS-5991 - Kube APIServer panics in admission controller\nOCPBUGS-5997 - Add Git Repository form shows empty permission content and non-working help link until a git url is entered\nOCPBUGS-6004 - apiserver pods cannot reach etcd on single node IPv6 cluster: transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10\"\nOCPBUGS-6011 - openshift-client package has wrong version of kubectl bundled\nOCPBUGS-6018 - The MCO can generate a rendered config with old KubeletConfig contents, blocking upgrades\nOCPBUGS-6026 - cannot change /etc folder ownership inside pod\nOCPBUGS-6033 - metallb 4.12.0-202301042354 (OCP 4.12) refers to external image\nOCPBUGS-6049 - Do not show UpdateInProgress when status is Failing\nOCPBUGS-6053 - `availableUpdates: null` results in run-time error on Cluster Settings page\nOCPBUGS-6055 - thanos-ruler-user-workload-1 pod is getting repeatedly re-created after upgrade do 4.10.41\nOCPBUGS-6063 - PVs(vmdk) get deleted when scaling down machineSet with vSphere IPI\nOCPBUGS-6089 - Unnecessary event reprocessing\nOCPBUGS-6092 - ovs-configuration.service fails - Error: Connection activation failed: No suitable device found for this connection\nOCPBUGS-6097 - CVO hotloops on ImageStream and logs the information incorrectly\nOCPBUGS-6098 - Show Git icon and URL in repository link in PLR details page should be based on the git provider\nOCPBUGS-6101 - Daemonset is not upgraded after operator upgrade\nOCPBUGS-6175 - Image registry Operator does not use Proxy when connecting to openstack\nOCPBUGS-6185 - 
Update 4.13 ose-cluster-config-operator image to be consistent with ART\nOCPBUGS-6187 - Update 4.13 openshift-state-metrics image to be consistent with ART\nOCPBUGS-6189 - Update 4.13 ose-cluster-authentication-operator image to be consistent with ART\nOCPBUGS-6191 - Update 4.13 ose-network-metrics-daemon image to be consistent with ART\nOCPBUGS-6197 - Update 4.13 ose-openshift-apiserver image to be consistent with ART\nOCPBUGS-6201 - Update 4.13 openshift-enterprise-pod image to be consistent with ART\nOCPBUGS-6202 - Update 4.13 ose-cluster-kube-apiserver-operator image to be consistent with ART\nOCPBUGS-6213 - Update 4.13 ose-machine-config-operator image to be consistent with ART\nOCPBUGS-6222 - Update 4.13 ose-alibaba-cloud-csi-driver image to be consistent with ART\nOCPBUGS-6228 - Update 4.13 coredns image to be consistent with ART\nOCPBUGS-6231 - Update 4.13 ose-kube-storage-version-migrator image to be consistent with ART\nOCPBUGS-6232 - Update 4.13 marketplace-operator image to be consistent with ART\nOCPBUGS-6233 - Update 4.13 ose-cluster-openshift-apiserver-operator image to be consistent with ART\nOCPBUGS-6234 - Update 4.13 ose-cluster-bootstrap image to be consistent with ART\nOCPBUGS-6235 - Update 4.13 cluster-network-operator image to be consistent with ART\nOCPBUGS-6238 - Update 4.13 oauth-server image to be consistent with ART\nOCPBUGS-6240 - Update 4.13 ose-cluster-kube-storage-version-migrator-operator image to be consistent with ART\nOCPBUGS-6241 - Update 4.13 operator-lifecycle-manager image to be consistent with ART\nOCPBUGS-6247 - Update 4.13 ose-cluster-ingress-operator image to be consistent with ART\nOCPBUGS-6262 - Add more logs to \"oc extract\" in mco-first boot service \nOCPBUGS-6265 - When installing SNO with bootstrap in place it takes CVO 6 minutes to acquire the leader lease \nOCPBUGS-6270 - Irrelevant vsphere platform data is required\nOCPBUGS-6272 - E2E tests: Entire pipeline flow from Builder page Start the pipeline with 
workspace\nOCPBUGS-631 - machineconfig service is failed to start because Podman storage gets corrupted\nOCPBUGS-6486 - Image upload fails when installing cluster\nOCPBUGS-6503 - admin ack test nondeterministically does a check post-upgrade\nOCPBUGS-6504 - IPI Baremetal Master Node in DualStack getting fd69:: address randomly, OVN CrashLoopBackOff\nOCPBUGS-6507 - Don\u0027t retry network policy peer pods if ips couldn\u0027t be fetched\nOCPBUGS-6577 - Node-exporter NodeFilesystemAlmostOutOfSpace alert exception needed\nOCPBUGS-6610 - Developer - Topology : \u0027Filter by resource\u0027 drop-down i18n misses\nOCPBUGS-6621 - Image registry panics while deploying OCP in ap-southeast-4 AWS region\nOCPBUGS-6624 - Issue deploying the master node with IPI\nOCPBUGS-6634 - Let the console able to build on other architectures and compatible with prow builds\nOCPBUGS-6646 - Ingress node firewall CI is broken with latest\nOCPBUGS-6647 - User Preferences - Applications : Resource type drop-down i18n misses\nOCPBUGS-6651 - Nodes unready in PublicAndPrivate / Private Hypershift setups behind a proxy\nOCPBUGS-6660 - Uninstall Operator? 
modal instructions always reference optional checkbox\nOCPBUGS-6663 - Platform baremetal warnings during create image when fields not defined\nOCPBUGS-6682 - [OVN] ovs-configuration vSphere vmxnet3 allmulti workaround is now permanent\nOCPBUGS-6698 - Fix conflict error message in cluster-ingress-operator\u0027s ensureNodePortService\nOCPBUGS-6700 - Cluster-ingress-operator\u0027s updateIngressClass function logs success message when error\nOCPBUGS-6701 - The ingress-operator spuriously updates ingressClass on startup\nOCPBUGS-6714 - Traffic from egress IPs was interrupted after Cluster patch to Openshift 4.10.46\nOCPBUGS-672 - Redhat-operators are failing regularly due to startup probe timing out which in turn increases CPU/Mem usage on Master nodes\nOCPBUGS-6722 - s390x: failed to generate asset \"Image\": multiple \"disk\" artifacts found\nOCPBUGS-6730 - Pod latency spikes are observed when there is a compaction/leadership transfer\nOCPBUGS-6731 - Gathered Environment variables (HTTP_PROXY/HTTPS_PROXY) may contain sensible information and should be obfuscated\nOCPBUGS-6741 - opm fails to serve FBC if cachedir not provided\nOCPBUGS-6757 - Pipeline Repository (Pipeline-as-Code) list page shows an empty Event type column\nOCPBUGS-6760 - Couldn\u0027t update/delete cpms on gcp private cluster\nOCPBUGS-6762 - Enhance the user experience for the name-filter-input on Metrics target page\nOCPBUGS-6765 - \"Delete dependent objects of this resource\" might cause confusions\nOCPBUGS-6777 - [gcp][CORS-1988] \"create manifests\" without an existing \"install-config.yaml\" missing 4 YAML files in \"\u003cinstall dir\u003e/openshift\" which leads to \"create cluster\" failure\nOCPBUGS-6781 - gather Machine objects\nOCPBUGS-6797 - Empty IBMCOS storage config causes operator to crashloop\nOCPBUGS-6799 - Repositories list does not show the running pipelinerun as last pipelinerun\nOCPBUGS-6809 - Uploading large layers fails with \"blob upload invalid\"\nOCPBUGS-6811 - Update 
Cluster Sample Operator dependencies and libraries for OCP 4.13\nOCPBUGS-6821 - Update NTO images to be consistent with ART\nOCPBUGS-6832 - Include openshift_apps_deploymentconfigs_strategy_total to recent_metrics\nOCPBUGS-6893 - Dev console doesn\u0027t finish loading for users with limited access\nOCPBUGS-6902 - 4.13-e2e-metal-ipi-upgrade-ovn-ipv6 on permafail\nOCPBUGS-6917 - MultinetworkPolicy: unknown service runtime.v1alpha2.RuntimeService\nOCPBUGS-6925 - Update OWNERS_ALIASES in release-4.13 branch\nOCPBUGS-6945 - OS Release reports incorrect version ID\nOCPBUGS-6953 - ovnkube-master panic nil deref\nOCPBUGS-6955 - panic in an ovnkube-master pod\nOCPBUGS-6962 - \u0027agent_installer\u0027 invoker not showing up in telemetry\nOCPBUGS-6977 - pod-identity-webhook replicas=2 is failing single node jobs\nOCPBUGS-6978 - Index violation on IGMP_Group during upgrade from 4.12.0 to 4.12.1\nOCPBUGS-6994 - All Clusters perspective is not activated automatically when ACM is installed\nOCPBUGS-702 - The caBundle field of alertmanagerconfigs.monitoring.coreos.com crd is getting removed\nOCPBUGS-7031 - Pipelines repository list and creation form doesn\u0027t show Tech Preview status\nOCPBUGS-7090 - Add to navigation button in search result does nothing\nOCPBUGS-7102 - OLM downstream utest fails due to new release-XX+1 branch creation\nOCPBUGS-7106 - network-tools needs to be updated to give ovn-k master leader info\nOCPBUGS-7118 - OCP 4.12 does not support launching SGX enclaves\nOCPBUGS-7144 - On mobile screens, At pipeline details page the info alert on metrics tab is not showing correctly\nOCPBUGS-7149 - IPv6 multinode spoke no moving from rebooting/configuring stage\nOCPBUGS-7173 - [OVN] DHCP timeouts on Azure arm64, install fails\nOCPBUGS-7180 - [4.13] Bootimage bump tracker\nOCPBUGS-7186 - [gcp][CORS-2424] with \"secureBoot\" enabled, after deleting control-plane machine, the new machine is created with \"enableSecureBoot\" being False unexpectedly\nOCPBUGS-7195 - 
[CI-Watcher] e2e issue with tests: Create Samples Page Timeout Error\nOCPBUGS-7199 - [CI-Watcher] e2e issue with tests: Interacting with CatalogSource page\nOCPBUGS-7204 - Manifests generated to multiple \"results-xxx\" folders when using the oci feature with OCI and nonOCI catalogs \nOCPBUGS-7207 - MTU migration configuration is cleaned up prematurely while in progress\nOCPBUGS-723 - ClusterResourceQuota values are not reflecting. \nOCPBUGS-7268 - [4.13] Modify the PSa pod extractor to mutate pod controller pod specs\nOCPBUGS-7284 - Hypershift failing new SCC conformance tests\nOCPBUGS-7291 - ptp keeps trying to start phc2sys even if it\u0027s configured as empty string in phc2sysOpts\nOCPBUGS-7293 - RHCOS 9.2 Failing to Bootstrap on Metal, OpenStack, vSphere (all baremetal runtime platforms)\nOCPBUGS-7300 - aws-ebs-csi-driver-operator crash loops with HC proxy configured\nOCPBUGS-7301 - Not possible to use certain start addresses in whereabouts IPv6 range [Backport 4.13]\nOCPBUGS-7308 - Download kubeconfig for ServiceAccount returns error\nOCPBUGS-7354 - Installation failed on Azure SDN as network is degraded \nOCPBUGS-7356 - Default channel on OCP 4.13 should be stable-4.13\nOCPBUGS-7359 - [Azure] Replace master failed as new master did not add into lb backend \nOCPBUGS-736 - Kuryr uses default MTU for service network\nOCPBUGS-7366 - [gcp] New machine stuck in Provisioning when delete one zone from cpms on gcp with customer vpc\nOCPBUGS-7372 - fail early on missing node status envs\nOCPBUGS-7374 - set default timeouts in etcdcli\nOCPBUGS-7391 - Monitoring operator long delay reconciling extension-apiserver-authentication\nOCPBUGS-7399 - In the Edit application mode, the name of the added pipeline is not displayed anymore\nOCPBUGS-7408 - AzureDisk CSI driver does not compile with cachito\nOCPBUGS-7412 - gomod dependencies failures in 4.13-4.14 container builds\nOCPBUGS-7417 - gomod dependencies failures in 4.13-4.14 container builds\nOCPBUGS-7418 - Default values 
for Scaling fields is not set in Create Serverless function form\nOCPBUGS-7419 - CVO delay when setting clusterversion available status to true \nOCPBUGS-7421 - Missing i18n key for PAC section in Git import form\nOCPBUGS-7424 - Bump cluster-ingress-operator to k8s APIs v0.26.1\nOCPBUGS-7427 - dynamic-demo-plugin.spec.ts requires 10 minutes of unnecessary wait time\nOCPBUGS-7438 - Egress service does not handle invalid nodeSelectors correctly\nOCPBUGS-7482 - Fix handling of single failure-domain (non-tagged) deployments in vsphere\nOCPBUGS-7483 - Hypershift installs on \"platform: none\" are broken\nOCPBUGS-7488 - test flake: should not reconcile SC when state is Unmanaged\nOCPBUGS-7495 - Platform type is ignored\nOCPBUGS-7517 - Helm page crashes on old releases with a new Secret\nOCPBUGS-7519 - NFS Storage Tests trigger Kernel Panic on Azure and Metal\nOCPBUGS-7523 - Add new AWS regions for ROSA\nOCPBUGS-7542 - Bump router to k8s APIs v0.26.1\nOCPBUGS-7555 - Enable default sysctls for kubelet\nOCPBUGS-7558 - Rebase coredns to 1.10.1\nOCPBUGS-7563 - vSphere install can\u0027t complete with out-of-tree CCM\nOCPBUGS-7579 - [azure] failed to parse client certificate when using certificate-based Service Principal with passpharse\nOCPBUGS-7611 - PTPOperator config transportHost with AMQ is not detected \nOCPBUGS-7616 - vSphere multiple in-tree test failures (non-zonal)\nOCPBUGS-7617 - Azure Disk volume is taking time to attach/detach\nOCPBUGS-7622 - vSphere UPI jobs failing with \u0027Managed cluster should have machine resources\u0027\nOCPBUGS-7648 - Bump cluster-dns-operator to k8s APIs v0.26.1\nOCPBUGS-7689 - Project Admin is able to Label project with empty string in RHOCP 4\nOCPBUGS-7696 - [ Azure ]not able to deploy machine with publicIp:true\nOCPBUGS-7707 - /etc/NetworkManager/dispatcher.d needs to be relabeled during pivot from 8.6 to 9.2\nOCPBUGS-7719 - Update to 4.13.0-ec.3 stuck on leaked MachineConfig\nOCPBUGS-7729 - Remove ETCD liviness probe. 
\nOCPBUGS-7731 - Need to cancel threads when agent-tui timeout is stopped\nOCPBUGS-7733 - Afterburn fails on AWS/GCP clusters born in OCP 4.1/4.2\nOCPBUGS-7743 - SNO upgrade from 4.12 to 4.13 rhel9.2 is broken cause of dnsmasq default config\nOCPBUGS-7750 - fix gofmt check issue in network-metrics-daemon\nOCPBUGS-7754 - ART having trouble building olm images\nOCPBUGS-7774 - RawCNIConfig is printed in byte representation on failure, not human readable\nOCPBUGS-7785 - migrate to using Lease for leader election\nOCPBUGS-7806 - add \"nfs-export\" under PV details page\nOCPBUGS-7809 - sg3_utils package is missing in the assisted-installer-agent Docker file\nOCPBUGS-781 - ironic-proxy is using a deprecated field to fetch cluster VIP\nOCPBUGS-7833 - Storage tests failing in no-capabilities job\nOCPBUGS-7837 - hypershift: aws-ebs-csi-driver-operator uses guest cluster proxy causing PV provisioning failure\nOCPBUGS-7860 - [azure] message is unclear when missing clientCertificatePassword in osServicePrincipal.json\nOCPBUGS-7876 - [Descheduler] Enabling LifeCycleUtilization to test namespace filtering does not work\nOCPBUGS-7879 - Devfile isn\u0027t be processed correctly on \u0027Add from git repo\u0027\nOCPBUGS-7896 - MCO should not add keepalived pod manifests in case of VSPHERE UPI\nOCPBUGS-7899 - ODF Monitor pods failing to be bounded because timeout issue with thin-csi SC\nOCPBUGS-7903 - Pool degraded with error: rpm-ostree kargs: signal: terminated\nOCPBUGS-7909 - Baremetal runtime prepender creates /etc/resolv.conf mode 0600 and bad selinux context\nOCPBUGS-794 - OLM version rule is not clear\nOCPBUGS-7940 - apiserver panics in admission controller\nOCPBUGS-7943 - AzureFile CSI driver does not compile with cachito\nOCPBUGS-7970 - [E2E] Always close the filter dropdown in listPage.filter.by\nOCPBUGS-799 - Reply packet for DNS conversation to service IP uses pod IP as source\nOCPBUGS-8066 - Create Serverless Function form breaks if Pipeline Operator is not 
installed\nOCPBUGS-8086 - Visual issues with listing items\nOCPBUGS-8243 - [release 4.13] Gather Monitoring pods\u0027 Persistent Volumes\nOCPBUGS-8308 - Bump openshift/kubernetes to 1.26.2\nOCPBUGS-8312 - IPI on Power VS clusters cannot deploy MCO\nOCPBUGS-8326 - Azure cloud provider should use Kubernetes 1.26 dependencies\nOCPBUGS-8341 - Unable to set capabilities with agent installer based installation \nOCPBUGS-8342 - create cluster-manifests fails when imageContentSources is missing\nOCPBUGS-8353 - PXE support is incomplete\nOCPBUGS-8381 - Console shows x509 error when requesting token from oauth endpoint\nOCPBUGS-8401 - Bump openshift/origin to kube 1.26.2\nOCPBUGS-8424 - ControlPlaneMachineSet: Machine\u0027s Node should be Ready to consider the Machine Ready\nOCPBUGS-8445 - cgroups default setting in OCP 4.13 generates extra MachineConfig\nOCPBUGS-8463 - OpenStack Failure domains as 4.13 TechPreview\nOCPBUGS-8471 - [4.13] egress firewall only createas 1 acl for long namespace names\nOCPBUGS-8475 - TestBoundTokenSignerController causes unrecoverable disruption in e2e-gcp-operator CI job\nOCPBUGS-8481 - CAPI rebases 4.13 backports\nOCPBUGS-8490 - agent-tui: display additional checks only when primary check fails\nOCPBUGS-8498 - aws-ebs-csi-driver-operator ServiceAccount does not include the HCP pull-secret in its imagePullSecrets\nOCPBUGS-8505 - [4.13] egress firewall acls are deleted on restart\nOCPBUGS-8511 - [4.13+ ONLY] Don\u0027t use port 80 in bootstrap IPI bare metal\nOCPBUGS-855 - When setting allowedRegistries urls the openshift-samples operator is degraded\nOCPBUGS-859 - monitor not working with UDP lb when externalTrafficPolicy: Local\nOCPBUGS-860 - CSR are generated with incorrect Subject Alternate Names\nOCPBUGS-8699 - Metal IPI Install Rate Below 90%\nOCPBUGS-8701 - `oc patch project` not working with OCP 4.12\nOCPBUGS-8702 - OKD SCOS: remove workaround for rpm-ostree auth\nOCPBUGS-8703 - fails to switch to kernel-rt with rhel 9.2\nOCPBUGS-8710 
- [4.13] don\u0027t enforce PSa in 4.13\nOCPBUGS-8712 - AES-GCM encryption at rest is not supported by kube-apiserver-operator\nOCPBUGS-8719 - Allow the user to scroll the content of the agent-tui details view\nOCPBUGS-8741 - [4.13] Pods in same deployment will have different ability to query services in same namespace from one another; ocp 4.10\nOCPBUGS-8742 - Origin tests should not specify `readyz` as the health check path\nOCPBUGS-881 - fail to create install-config.yaml as apiVIP and ingressVIP are not in machine networks\nOCPBUGS-8941 - Introduce tooltips for contextual information\nOCPBUGS-904 - Alerts from MCO are missing namespace\nOCPBUGS-9079 - ICMP fragmentation needed sent to pods behind a service don\u0027t seem to reach the pods\nOCPBUGS-91 - [ExtDNS] New TXT record breaks downward compatibility by retroactively limiting record length\nOCPBUGS-9132 - WebSCale: ovn logical router polices incorrect/l3 gw config not updated after IP change\nOCPBUGS-9185 - Pod latency spikes are observed when there is a compaction/leadership transfer\nOCPBUGS-9233 - ConsoleQuickStart {{copy}} and {{execute}} features do not work in some cases\nOCPBUGS-931 - [osp][octavia lb] NodePort allocation cannot be disabled for LB type svcs\nOCPBUGS-9338 - editor toggle radio input doesn\u0027t have distinguishable attributes\nOCPBUGS-9389 - Detach code in vsphere csi driver is failing\nOCPBUGS-948 - OLM sets invalid SCC label on its namespaces\nOCPBUGS-95 - NMstate removes egressip in OpenShift cluster with SDN plugin\nOCPBUGS-9913 - bacport tests for PDBUnhealthyPodEvictionPolicy as Tech Preview\nOCPBUGS-9924 - Remove unsupported warning in oc-mirror when using the --skip-pruning flag\nOCPBUGS-9926 - Enable node healthz server for ovnk in CNO \nOCPBUGS-9951 - fails to reconcile to RT kernel on interrupted updates\nOCPBUGS-9957 - Garbage collect grafana-dashboard-etcd\nOCPBUGS-996 - Control Plane Machine Set Operator OnDelete update should cause an error when more than one machine 
is ready in an index\nOCPBUGS-9963 - Better to change the error information more clearly to help understand \nOCPBUGS-9968 - Operands running management side missing affinity, tolerations, node selector and priority rules than the operator\n\n6. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-4235\nhttps://access.redhat.com/security/cve/CVE-2021-4238\nhttps://access.redhat.com/security/cve/CVE-2021-20329\nhttps://access.redhat.com/security/cve/CVE-2021-38561\nhttps://access.redhat.com/security/cve/CVE-2021-43519\nhttps://access.redhat.com/security/cve/CVE-2021-44964\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1587\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2509\nhttps://access.redhat.com/security/cve/CVE-2022-2990\nhttps://access.redhat.com/security/cve/CVE-2022-3080\nhttps://access.redhat.com/security/cve/CVE-2022-3259\nhttps://access.redhat.com/security/cve/CVE-2022-4203\nhttps://access.redhat.com/security/cve/CVE-2022-4304\nhttps://access.redhat.com/security/cve/CVE-2022-4450\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-23525\nhttps://access.redhat.com/security/cve/CVE-2022-23526\nhttps://access.redhat.com/security/cve/CVE-2022-26280\nhttps://access.redhat.com/security/cve/CVE-2022-27191\nhttps://access.redhat.com/security/cve/CVE-2022-29154\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-34903\nhttps://access.redhat.com/security/cve/CVE-2022-38023\nhttps://access.redhat.com/security/cve/CVE-2022-38177\nhttps://access.redhat.com/security/cve/CVE-2022-38178\nhttps://access.redhat.com/security/cve/CVE-2022-40674\nhttps://access.redhat.com/security/cve/CVE-2022-41316\n
https://access.redhat.com/security/cve/CVE-2022-41717\nhttps://access.redhat.com/security/cve/CVE-2022-41721\nhttps://access.redhat.com/security/cve/CVE-2022-41723\nhttps://access.redhat.com/security/cve/CVE-2022-41724\nhttps://access.redhat.com/security/cve/CVE-2022-41725\nhttps://access.redhat.com/security/cve/CVE-2022-42010\nhttps://access.redhat.com/security/cve/CVE-2022-42011\nhttps://access.redhat.com/security/cve/CVE-2022-42012\nhttps://access.redhat.com/security/cve/CVE-2022-42898\nhttps://access.redhat.com/security/cve/CVE-2022-42919\nhttps://access.redhat.com/security/cve/CVE-2022-46146\nhttps://access.redhat.com/security/cve/CVE-2022-47629\nhttps://access.redhat.com/security/cve/CVE-2023-0056\nhttps://access.redhat.com/security/cve/CVE-2023-0215\nhttps://access.redhat.com/security/cve/CVE-2023-0216\nhttps://access.redhat.com/security/cve/CVE-2023-0217\nhttps://access.redhat.com/security/cve/CVE-2023-0229\nhttps://access.redhat.com/security/cve/CVE-2023-0286\nhttps://access.redhat.com/security/cve/CVE-2023-0361\nhttps://access.redhat.com/security/cve/CVE-2023-0401\nhttps://access.redhat.com/security/cve/CVE-2023-0620\nhttps://access.redhat.com/security/cve/CVE-2023-0665\nhttps://access.redhat.com/security/cve/CVE-2023-0778\nhttps://access.redhat.com/security/cve/CVE-2023-25000\nhttps://access.redhat.com/security/cve/CVE-2023-25165\nhttps://access.redhat.com/security/cve/CVE-2023-25173\nhttps://access.redhat.com/security/cve/CVE-2023-25577\nhttps://access.redhat.com/security/cve/CVE-2023-25725\nhttps://access.redhat.com/security/cve/CVE-2023-25809\nhttps://access.redhat.com/security/cve/CVE-2023-27561\nhttps://access.redhat.com/security/cve/CVE-2023-28642\nhttps://access.redhat.com/security/cve/CVE-2023-30570\nhttps://access.redhat.com/security/cve/CVE-2023-30841\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html\n\n7. 
Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBZGVrhNzjgjWX9erEAQjD7BAAihZ8nlrasEU8QISGjHMUkUXKPHgV6LlZ\nIT2h0MLam8ICSCDdZ8PUVXhWP+CTTIYYdpEPTaIdKdB16iecRXm2ML8GtQ38zSjC\nLpCB4NUmAdoH91FbT2oazgrCgg+2hizfufLYk/8nNm9yVR0zT5kZbuXMFZH/PbCb\ndYYyRsXsNt4+MpaWGf1q3jS7OX8l5UXbfO+nnKHWoow5/PeclygxFbRclr7o62Dy\ntnfgs+OwbroI6L0nohsUTk4Es1koyD8FaGdo28ViLcgVH1VDhBqzHXSAe1P+XmAc\nPSG6slSRIrgJpARfN8OEI89wfI+ttyqEi4yAdoKjCo/pbshhLw3JZQcavmQc8XEK\no1afTtx0XFHJsAdZRjvq+7zExqnDANRLbtkkYG2gYuc8LgGmh6P0ZlhxQFMS3f/T\ncTLSLaP6XSnHQaJyc0kqULHcWBZRzepcIDPYkmTCbCVCwLjXuIoF6eMQvo7eRXCy\n4qN3nT0+M90jWxf/uQzo9NpeWFB7y2cccHMvaPzZ8cAAxpwM3Rphutu9lzRfJCl8\nTMincIMIFq3vLmrfxHX5YOKfgH/Kjc06TbtnzxtucFQVNFxyKIWKgJB/hl1mGDTJ\n8cibppoX+mLmUirPuu+5JwaAmq7skX5HKX3r3t8sajmij17nS2Ff8q52ZLgdZQ6H\nXbiJN3SZj5U=\n=WGO2\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.7.3 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/release_notes/\n\nSecurity fix(es)\n* CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service\n(ReDoS) vulnerability\n* CVE-2022-3841 RHACM: unauthenticated SSRF in console API endpoint\n* CVE-2023-29017 vm2: Sandbox Escape\n* CVE-2023-29199 vm2: Sandbox Escape\n* CVE-2023-30547 vm2: Sandbox Escape when exception sanitization\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2139426 - CVE-2022-3841 RHACM: unauthenticated SSRF in console API endpoint\n2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability\n2185374 - CVE-2023-29017 vm2: sandbox escape\n2187409 - CVE-2023-29199 vm2: Sandbox Escape\n2187608 - CVE-2023-30547 vm2: Sandbox Escape when exception sanitization\n\n5. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 9) - noarch\nRed Hat Enterprise Linux CRB (v. 9) - aarch64, noarch, x86_64\n\n3. Description:\n\nEDK (Embedded Development Kit) is a project to enable UEFI support for\nVirtual Machines. This package contains a sample 64-bit UEFI firmware for\nQEMU and KVM. \n\nSecurity Fix(es):\n\n* openssl: X.400 address type confusion in X.509 GeneralName\n(CVE-2023-0286)\n\n* edk2: integer underflow in SmmEntryPoint function leads to potential SMM\nprivilege escalation (CVE-2021-38578)\n\n* openssl: timing attack in RSA Decryption implementation (CVE-2022-4304)\n\n* openssl: double free after calling PEM_read_bio_ex (CVE-2022-4450)\n\n* openssl: use-after-free following BIO_new_NDEF (CVE-2023-0215)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 9.2 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1960321 - CVE-2021-38578 edk2: integer underflow in SmmEntryPoint function leads to potential SMM privilege escalation\n1983086 - Assertion failure when creating 1024 VCPU VM: [...]UefiCpuPkg/CpuMpPei/CpuBist.c(186): !EFI_ERROR (Status)\n2125336 - Please add edk2-aarch64 and edk2-tools to CRB in RHEL 9\n2132951 - edk2: Sort traditional virtualization builds before Confidential Computing builds\n2157656 - [edk2] [aarch64] Unable to initialize EFI firmware when using edk2-aarch64-20221207gitfff6d81270b5-1.el9 in some hardwares\n2162307 - Broken GRUB output on a serial console\n2164440 - CVE-2023-0286 openssl: X.400 address type confusion in X.509 GeneralName\n2164487 - CVE-2022-4304 openssl: timing attack in RSA Decryption implementation\n2164492 - CVE-2023-0215 openssl: use-after-free following BIO_new_NDEF\n2164494 - CVE-2022-4450 openssl: double free after calling PEM_read_bio_ex\n2168046 - [edk2] BIOS Release Date string is unexpected length\n2174605 - [EDK2] disable dynamic mmio window\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 9):\n\nSource:\nedk2-20221207gitfff6d81270b5-9.el9_2.src.rpm\n\nnoarch:\nedk2-aarch64-20221207gitfff6d81270b5-9.el9_2.noarch.rpm\nedk2-ovmf-20221207gitfff6d81270b5-9.el9_2.noarch.rpm\n\nRed Hat Enterprise Linux CRB (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.9 is now available. 
Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2174485 - CVE-2023-25173 containerd: Supplementary groups are not set up properly\n2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption\n2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics\n\n5. Description:\n\nRed Hat JBoss Web Server is a fully integrated and certified set of\ncomponents for hosting Java web applications. It is comprised of the Apache\nTomcat Servlet container, JBoss HTTP Connector (mod_cluster), the\nPicketLink Vault extension for Apache Tomcat, and the Tomcat Native\nlibrary. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. JIRA issues fixed (https://issues.redhat.com/):\n\nJWS-2933 - Update openssl from JBCS to versions from 2.4.51-SP2\n\n7. Bugs fixed (https://bugzilla.redhat.com/):\n\n2139896 - Requested TSC frequency outside tolerance range \u0026 TSC scaling not supported\n2145146 - CDI operator is not creating PrometheusRule resource with alerts if CDI resource is incorrect\n2148383 - Migration metrics values are not sum up values from all VMIs\n2149409 - HPP mounter deployment can\u0027t mount as unprivileged\n2168489 - Overview -\u003e Migrations - The ?Bandwidth consumption? 
Graph display with wrong values\n2184435 - [cnv-4.12] virt-handler should not delete any pre-configured mediated devices i these are provided by an external provider\n2222191 - [cnv-4.12] manually increasing the number of virt-api pods does not work\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2022-4450" }, { "db": "JVNDB", "id": "JVNDB-2022-003616" }, { "db": "PACKETSTORM", "id": "173547" }, { "db": "PACKETSTORM", "id": "172441" }, { "db": "PACKETSTORM", "id": "171957" }, { "db": "PACKETSTORM", "id": "172460" }, { "db": "PACKETSTORM", "id": "172238" }, { "db": "PACKETSTORM", "id": "172147" }, { "db": "PACKETSTORM", "id": "172733" }, { "db": "PACKETSTORM", "id": "174517" } ], "trust": 2.34 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-4450", "trust": 3.5 }, { "db": "ICS CERT", "id": "ICSA-23-075-04", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-165-10", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-165-11", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-046-15", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-23-194-04", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-102-08", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-165-06", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-23-320-08", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-23-255-01", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-23-166-11", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU91213144", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU99464755", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU95292697", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU99752892", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU97200253", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU92598492", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU93250330", "trust": 0.8 }, { "db": "JVN", "id": 
"JVNVU99836374", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU91198149", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2022-003616", "trust": 0.8 }, { "db": "VULMON", "id": "CVE-2022-4450", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "173547", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "172441", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171957", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "172460", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "172238", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "172147", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "172733", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "174517", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-4450" }, { "db": "JVNDB", "id": "JVNDB-2022-003616" }, { "db": "PACKETSTORM", "id": "173547" }, { "db": "PACKETSTORM", "id": "172441" }, { "db": "PACKETSTORM", "id": "171957" }, { "db": "PACKETSTORM", "id": "172460" }, { "db": "PACKETSTORM", "id": "172238" }, { "db": "PACKETSTORM", "id": "172147" }, { "db": "PACKETSTORM", "id": "172733" }, { "db": "PACKETSTORM", "id": "174517" }, { "db": "NVD", "id": "CVE-2022-4450" } ] }, "id": "VAR-202302-0195", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.2376099833333333 }, "last_update_date": "2024-07-23T19:21:02.492000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "hitachi-sec-2024-111", "trust": 0.8, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=63bcf189be73a9cc1264059bed6f57974be74a83" }, { "title": "", "trust": 0.1, "url": "https://github.com/waugustus/carpetfuzz " } ], "sources": [ { "db": "VULMON", "id": 
"CVE-2022-4450" }, { "db": "JVNDB", "id": "JVNDB-2022-003616" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-415", "trust": 1.0 }, { "problemtype": "Double release (CWE-415) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003616" }, { "db": "NVD", "id": "CVE-2022-4450" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4450" }, { "trust": 1.0, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=63bcf189be73a9cc1264059bed6f57974be74a83" }, { "trust": 1.0, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=bbcf509bd046b34cca19c766bbddc31683d0858b" }, { "trust": 1.0, "url": "https://security.gentoo.org/glsa/202402-08" }, { "trust": 1.0, "url": "https://www.openssl.org/news/secadv/20230207.txt" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu91213144/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu99752892/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu99464755/index.html" }, { "trust": 0.8, "url": "http://jvn.jp/vu/jvnvu95292697/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu97200253/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu92598492/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu91198149/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu99836374/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu93250330/index.html" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-075-04" }, { "trust": 0.8, "url": 
"https://www.cisa.gov/news-events/ics-advisories/icsa-23-166-11" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-194-04" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-255-01" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-320-08" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-046-15" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-102-08" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-165-06" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-165-10" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-165-11" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2023-0215" }, { "trust": 0.8, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-4450" }, { "trust": 0.8, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-4304" }, { "trust": 0.8, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.6, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4304" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2023-0361" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-0215" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2023-0286" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-0361" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-0286" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 
0.3, "url": "https://access.redhat.com/security/cve/cve-2022-41725" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-41724" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2023-23916" }, { "trust": 0.2, "url": "https://issues.redhat.com/):" }, { "trust": 0.2, "url": "https://issues.jboss.org/):" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2023-25173" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-41717" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-34903" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-42898" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-47629" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23916" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://github.com/waugustus/carpetfuzz" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-26604" }, { "trust": 0.1, "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2023-001" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:4114" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-1667" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-2283" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-24329" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-3089" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-2283" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-1667" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-24736" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-3089" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-26604" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-24329" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20329" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38023" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26280" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0620" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1587" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0665" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0778" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-46146" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41721" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-25725" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38177" }, { "trust": 0.1, "url": "https://[2620:52:0:1eb:367x:5axx:xxx:xxx]:2379]:" }, { "trust": 0.1, "url": "https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38178" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4238" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1587" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561" }, 
{ "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-28642" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3259" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23526" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41316" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-25577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-30570" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:1325" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43519" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2990" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43519" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2509" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42012" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0056" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-30841" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20329" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41723" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40674" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42919" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38561" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2023-0229" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-27561" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44964" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-25000" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4238" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42011" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:1326" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-25165" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0217" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0401" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44964" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42010" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0216" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29154" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-4203" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-25809" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3080" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3841" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#critical" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-29199" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-29017" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25881" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-29017" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30547" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25881" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-30547" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-29199" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:1888" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22662" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41715" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-35737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27664" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0584" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-26719" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22624" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-46848" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26709" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32190" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30293" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-4415" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22628" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40304" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1304" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40303" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32189" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2880" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26700" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-22629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27664" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46848" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38578" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38578" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:2165" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-28617" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25173" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-41725" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-28617" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-41724" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:2107" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:3420" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-34969" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-38408" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-3899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-2602" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-32681" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-29469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-3709" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-28321" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-34969" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-29469" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2023-27536" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32681" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-28321" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-28484" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-27536" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-28484" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:4982" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-2603" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-2602" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-2603" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-38408" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-4450" }, { "db": "JVNDB", "id": "JVNDB-2022-003616" }, { "db": "PACKETSTORM", "id": "173547" }, { "db": "PACKETSTORM", "id": "172441" }, { "db": "PACKETSTORM", "id": "171957" }, { "db": "PACKETSTORM", "id": "172460" }, { "db": "PACKETSTORM", "id": "172238" }, { "db": "PACKETSTORM", "id": "172147" }, { "db": "PACKETSTORM", "id": "172733" }, { "db": "PACKETSTORM", "id": "174517" }, { "db": "NVD", "id": "CVE-2022-4450" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-4450" }, { "db": "JVNDB", "id": "JVNDB-2022-003616" }, { "db": "PACKETSTORM", "id": "173547" }, { "db": "PACKETSTORM", "id": "172441" }, { "db": "PACKETSTORM", "id": "171957" }, { "db": "PACKETSTORM", "id": "172460" }, { "db": "PACKETSTORM", "id": "172238" }, { "db": "PACKETSTORM", "id": "172147" }, { "db": "PACKETSTORM", "id": "172733" }, { "db": "PACKETSTORM", "id": "174517" }, { "db": "NVD", "id": "CVE-2022-4450" } ] }, 
"sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-28T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-003616" }, { "date": "2023-07-18T13:35:08", "db": "PACKETSTORM", "id": "173547" }, { "date": "2023-05-18T13:46:17", "db": "PACKETSTORM", "id": "172441" }, { "date": "2023-04-20T16:14:17", "db": "PACKETSTORM", "id": "171957" }, { "date": "2023-05-19T14:41:19", "db": "PACKETSTORM", "id": "172460" }, { "date": "2023-05-09T15:23:44", "db": "PACKETSTORM", "id": "172238" }, { "date": "2023-05-04T14:45:01", "db": "PACKETSTORM", "id": "172147" }, { "date": "2023-06-06T16:30:13", "db": "PACKETSTORM", "id": "172733" }, { "date": "2023-09-06T16:39:54", "db": "PACKETSTORM", "id": "174517" }, { "date": "2023-02-08T20:15:23.973000", "db": "NVD", "id": "CVE-2022-4450" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2024-06-17T07:09:00", "db": "JVNDB", "id": "JVNDB-2022-003616" }, { "date": "2024-02-04T09:15:08.733000", "db": "NVD", "id": "CVE-2022-4450" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "172441" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL\u00a0 Double release vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003616" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "sql injection", 
"sources": [ { "db": "PACKETSTORM", "id": "172441" } ], "trust": 0.1 } }
- Seen: The vulnerability was mentioned, discussed, or seen somewhere by the user.
- Confirmed: The vulnerability is confirmed from an analyst perspective.
- Exploited: This vulnerability was exploited and seen by the user reporting the sighting.
- Patched: This vulnerability was successfully patched by the user reporting the sighting.
- Not exploited: This vulnerability was not exploited or seen by the user reporting the sighting.
- Not confirmed: The user expresses doubt about the veracity of the vulnerability.
- Not patched: This vulnerability was not successfully patched by the user reporting the sighting.