var-202205-1990
Vulnerability from variot

Buffer Over-read in GitHub repository vim/vim prior to 8.2. Vim is a cross-platform text editor. Vim versions prior to 8.2 are affected by a buffer over-read vulnerability (a hedged version-check sketch follows the first bug list below). Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):

2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2115198 - build ceph containers for RHCS 5.2 release
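Returning to the Vim note at the top of this entry: the sketch below (referenced there) is a minimal, non-authoritative way to compare a locally installed Vim version against the 8.2 baseline quoted in the description. It assumes a `vim` binary on PATH whose `--version` output begins with a line such as "VIM - Vi IMproved 8.2"; the exact fixed patch level should be taken from the upstream advisory rather than from this check.

```python
# Minimal sketch: compare the locally installed Vim version against the
# "prior to 8.2" baseline quoted in this entry. Assumes `vim` is on PATH;
# the authoritative fixed patch level comes from the upstream advisory.
import re
import subprocess


def installed_vim_version():
    out = subprocess.run(
        ["vim", "--version"], capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"Vi IMproved (\d+)\.(\d+)", out)
    return (int(match.group(1)), int(match.group(2))) if match else None


if __name__ == "__main__":
    version = installed_vim_version()
    if version is None:
        print("Could not determine the Vim version.")
    elif version < (8, 2):
        print("Vim %d.%d is older than 8.2 and may be affected." % version)
    else:
        print("Vim %d.%d meets or exceeds the 8.2 baseline quoted here." % version)
```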

  1. Bugs fixed (https://bugzilla.redhat.com/):

2041540 - RHACM 2.4 using deprecated APIs in managed clusters
2074766 - vSphere network name doesn't allow entering spaces and doesn't reflect YAML changes
2079418 - cluster update status is stuck, also update is not even visible
2088486 - Policy that creates cluster role is showing as not compliant due to Request entity too large message
2089490 - Upgraded from RHACM 2.2-->2.3-->2.4 and cannot create cluster
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2097464 - ACM Console Becomes Unusable After a Time
2100613 - RHACM 2.4.6 images
2102436 - Cluster Pools with conflicting name of existing clusters in same namespace fails creation and deletes existing cluster
2102495 - ManagedClusters in Pending import state after ACM hub migration
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2109354 - CVE-2022-31150 nodejs16: CRLF injection in node-undici
2121396 - CVE-2022-31151 nodejs/undici: Cookie headers uncleared on cross-origin redirect
2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2
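The bug lists above and below reference bugzilla.redhat.com by ID only. As a convenience, the hedged sketch below pulls bug summaries through Red Hat Bugzilla's REST interface; it assumes the standard /rest/bug/<id> endpoint is reachable and that the cited bugs are public (no API key needed).

```python
# Hedged sketch: look up summaries for Bugzilla IDs cited in this entry.
# Assumes the Red Hat Bugzilla REST endpoint /rest/bug/<id> and public bugs.
import json
import urllib.request


def bug_summary(bug_id):
    url = "https://bugzilla.redhat.com/rest/bug/%d" % bug_id
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    return data["bugs"][0]["summary"]


if __name__ == "__main__":
    for bug_id in (2092793, 2105075):  # IDs taken from the list above
        print(bug_id, "-", bug_summary(bug_id))
```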


====================================================================
Red Hat Security Advisory

Synopsis:          Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update
Advisory ID:       RHSA-2022:6051-01
Product:           RHOL
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:6051
Issue date:        2022-08-18
CVE Names:         CVE-2021-38561 CVE-2022-0759 CVE-2022-1012
                   CVE-2022-1292 CVE-2022-1586 CVE-2022-1785
                   CVE-2022-1897 CVE-2022-1927 CVE-2022-2068
                   CVE-2022-2097 CVE-2022-21698 CVE-2022-30631
                   CVE-2022-32250
====================================================================

1. Summary:

An update is now available for RHOL-5.5-RHEL-8.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
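The CVE pages linked from the References section present the base score together with a CVSS v3.1 vector string. As a purely illustrative aid, the sketch below splits such a vector into its individual metrics; it does not compute the numeric score, and the example vector is hypothetical rather than taken from any CVE in this advisory.

```python
# Illustrative sketch: split a CVSS v3.1 vector string into its metrics.
# It does not compute the numeric base score; real vectors come from the
# CVE pages referenced by this advisory.
def parse_cvss_vector(vector):
    parts = vector.split("/")
    if not parts or not parts[0].startswith("CVSS:"):
        raise ValueError("not a CVSS vector string")
    return dict(part.split(":", 1) for part in parts[1:])


if __name__ == "__main__":
    example = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"  # hypothetical example
    print(parse_cvss_vector(example))
```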

  1. Description:

Logging Subsystem 5.5.0 - Red Hat OpenShift

Security Fix(es):

  • kubeclient: kubeconfig parsing error can lead to MITM attacks (CVE-2022-0759)

  • golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)

  • golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)

  • prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
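Beyond the human-readable CVE pages, the same details are exposed in machine-readable form. The sketch below is a minimal example, assuming Red Hat's Security Data API endpoint https://access.redhat.com/hydra/rest/securitydata/cve/<CVE-ID>.json is available and returns bugzilla and cvss3 fields; verify the field names against the API documentation before relying on them.

```python
# Minimal sketch, assuming the Red Hat Security Data API endpoint
# /hydra/rest/securitydata/cve/<CVE-ID>.json; field names are assumptions
# and should be checked against the API documentation.
import json
import urllib.request


def cve_details(cve_id):
    url = "https://access.redhat.com/hydra/rest/securitydata/cve/%s.json" % cve_id
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)


if __name__ == "__main__":
    details = cve_details("CVE-2022-30631")  # one of the CVEs fixed by this advisory
    print(details.get("bugzilla", {}).get("description"))
    print(details.get("cvss3", {}).get("cvss3_base_score"))
```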

  1. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

  1. Bugs fixed (https://bugzilla.redhat.com/):

2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

  1. JIRA issues fixed (https://issues.jboss.org/):

LOG-1415 - Allow users to tune fluentd LOG-1539 - Events and CLO csv are not collected after running oc adm must-gather --image=$downstream-clo-image LOG-1713 - Reduce Permissions granted for prometheus-k8s service account LOG-2063 - Collector pods fail to start when a Vector only Cluster Logging instance is created. LOG-2134 - The infra logs are sent to app-xx indices LOG-2159 - Cluster Logging Pods in CrashLoopBackOff LOG-2165 - [Vector] Default log level debug makes it hard to find useful error/failure messages. LOG-2167 - [Vector] Collector pods fails to start with configuration error when using Kafka SASL over SSL LOG-2169 - [Vector] Logs not being sent to Kafka with SASL plaintext. LOG-2172 - [vector]The openshift-apiserver and ovn audit logs can not be collected. LOG-2242 - Log file metric exporter is still following /var/log/containers files. LOG-2243 - grafana-dashboard-cluster-logging should be deleted once clusterlogging/instance was removed LOG-2264 - Logging link should contain an icon LOG-2274 - [Logging 5.5] EO doesn't recreate secrets kibana and kibana-proxy after removing them. LOG-2276 - Fluent config format is hard to read via configmap LOG-2290 - ClusterLogging Instance status in not getting updated in UI LOG-2291 - [release-5.5] Events listing out of order in Kibana 6.8.1 LOG-2294 - [Vector] Vector internal metrics are not exposed via HTTPS due to which OpenShift Monitoring Prometheus service cannot scrape the metrics endpoint. LOG-2300 - [Logging 5.5]ES pods can't be ready after removing secret/signing-elasticsearch LOG-2303 - [Logging 5.5] Elasticsearch cluster upgrade stuck LOG-2308 - configmap grafana-dashboard-elasticsearch is being created and deleted continously LOG-2333 - Journal logs not reaching Elasticsearch output LOG-2337 - [Vector] Missing @ prefix from the timestamp field in log record. LOG-2342 - [Logging 5.5] Kibana pod can't connect to ES cluster after removing secret/signing-elasticsearch: "x509: certificate signed by unknown authority" LOG-2384 - Provide a method to get authenticated from GCP LOG-2411 - [Vector] Audit logs forwarding not working. LOG-2412 - CLO's loki output url is parsed wrongly LOG-2413 - PriorityClass cluster-logging is deleted if provide an invalid log type LOG-2418 - EO supported time units don't match the units specified in CRDs. LOG-2439 - Telemetry: the managedStatus&healthStatus&version values are wrong LOG-2440 - [loki-operator] Live tail of logs does not work on OpenShift LOG-2444 - The write index is removed when the size of the index > diskThresholdPercent% * total size. LOG-2460 - [Vector] Collector pods fail to start on a FIPS enabled cluster. LOG-2461 - [Vector] Vector auth config not generated when user provided bearer token is used in a secret for connecting to LokiStack. LOG-2463 - Elasticsearch operator repeatedly prints error message when checking indices LOG-2474 - EO shouldn't grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.5] LOG-2522 - CLO supported time units don't match the units specified in CRDs. LOG-2525 - The container's logs are not sent to separate index if the annotation is added after the pod is ready. LOG-2546 - TLS handshake error on loki-gateway for FIPS cluster LOG-2549 - [Vector] [master] Journald logs not sent to the Log store when using Vector as collector. 
LOG-2554 - [Vector] [master] Fallback index is not used when structuredTypeKey is missing from JSON log data LOG-2588 - FluentdQueueLengthIncreasing rule failing to be evaluated. LOG-2596 - [vector]the condition in [transforms.route_container_logs] is inaccurate LOG-2599 - Supported values for level field don't match documentation LOG-2605 - $labels.instance is empty in the message when firing FluentdNodeDown alert LOG-2609 - fluentd and vector are unable to ship logs to elasticsearch when cluster-wide proxy is in effect LOG-2619 - containers violate PodSecurity -- Log Exporation LOG-2627 - containers violate PodSecurity -- Loki LOG-2649 - Level Critical should match the beginning of the line as the other levels LOG-2656 - Logging uses deprecated v1beta1 apis LOG-2664 - Deprecated Feature logs causing too much noise LOG-2665 - [Logging 5.5] Sometimes collector fails to push logs to Elasticsearch cluster LOG-2693 - Integration with Jaeger fails for ServiceMonitor LOG-2700 - [Vector] vector container can't start due to "unknown field pod_annotation_fields" . LOG-2703 - Collector DaemonSet is not removed when CLF is deleted for fluentd/vector only CL instance LOG-2725 - Upgrade logging-eventrouter Golang version and tags LOG-2731 - CLO keeps reporting Reconcile ServiceMonitor retry error and Reconcile Service retry error after creating clusterlogging. LOG-2732 - Prometheus Operator pod throws 'skipping servicemonitor' error on Jaeger integration LOG-2742 - unrecognized outputs when use the sts role secret LOG-2746 - CloudWatch forwarding rejecting large log events, fills tmpfs LOG-2749 - OpenShift Logging Dashboard for Elastic Shards shows "active_primary" instead of "active" shards. LOG-2753 - Update Grafana configuration for LokiStack integration on grafana/loki repo LOG-2763 - [Vector]{Master} Vector's healthcheck fails when forwarding logs to Lokistack. LOG-2764 - ElasticSearch operator does not respect referencePolicy when selecting oauth-proxy image LOG-2765 - ingester pod can not be started in IPv6 cluster LOG-2766 - [vector] failed to parse cluster url: invalid authority IPv6 http-proxy LOG-2772 - arn validation failed when role_arn=arn:aws-us-gov:xxx LOG-2773 - No cluster-logging-operator-metrics service in logging 5.5 LOG-2778 - [Vector] [OCP 4.11] SA token not added to Vector config when connecting to LokiStack instance without CLF creds secret required by LokiStack. LOG-2784 - Japanese log messages are garbled at Kibana LOG-2793 - [Vector] OVN audit logs are missing the level field. LOG-2864 - [vector] Can not sent logs to default when loki is the default output in CLF LOG-2867 - [fluentd] All logs are sent to application tenant when loki is used as default logstore in CLF. LOG-2873 - [Vector] Cannot configure CPU/Memory requests/limits when using Vector as collector. LOG-2875 - Seeing a black rectangle box on the graph in Logs view LOG-2876 - The link to the 'Container details' page on the 'Logs' screen throws error LOG-2877 - When there is no query entered, seeing error message on the Logs view LOG-2882 - RefreshIntervalDropdown and TimeRangeDropdown always set back to its original values when switching between pages in 'Logs' screen

  1. References:

https://access.redhat.com/security/cve/CVE-2021-38561
https://access.redhat.com/security/cve/CVE-2022-0759
https://access.redhat.com/security/cve/CVE-2022-1012
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-30631
https://access.redhat.com/security/cve/CVE-2022-32250
https://access.redhat.com/security/updates/classification/#important

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

Description:

Red Hat Advanced Cluster Management for Kubernetes 2.3.12 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):

2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation
2101411 - RHACM 2.3.12 images
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS

  1. 2109205 - HTTPS_PROXY ENV missing in some CSI driver operators 2109270 - Kube controllers crash when nodes are shut off in OpenStack 2109489 - Reply to arp requests on interfaces with no ip 2109709 - Namespace value is missing on the list when selecting "All namespaces" for operators 2109731 - alertmanager-main pods failing to start due to startupprobe timeout 2109866 - Cannot delete a Machine if a VM got stuck in ERROR 2109977 - storageclass should not be created for unsupported vsphere version 2110482 - [vsphere] failed to create cluster if datacenter is embedded in a Folder 2110723 - openshift-tests: allow -f to match tests for any test suite 2110737 - Master node in SchedulingDisabled after upgrade from 4.10.24 -> 4.11.0-rc.4 2111037 - Affinity rule created in console deployment for single-replica infrastructure 2111347 - dummy bug for 4.10.z bz2111335 2111471 - Node internal DNS address is not set for machine 2111475 - Fetch internal IPs of vms from dhcp server 2111587 - [4.11] Export OVS metrics 2111619 - Pods are unable to reach clusterIP services, ovn-controller isn't installing the group mod flows correctly 2111992 - OpenShift controller manager needs permissions to get/create/update leases for leader election 2112297 - bond-cni: Backport "mac duplicates" 4.11 2112353 - lifecycle.posStart hook does not have network connectivity. 2112908 - Search resource "virtualmachine" in "Home -> Search" crashes the console 2112912 - sum_irate doesn't work in OCP 4.8 2113926 - hypershift cluster deployment hang due to nil pointer dereference for hostedControlPlane.Spec.Etcd.Managed 2113938 - Fix e2e tests for [reboots][machine_config_labels] (tsc=nowatchdog) 2114574 - can not upgrade. Incorrect reading of olm.maxOpenShiftVersion 2114602 - Upgrade failing because restrictive scc is injected into version pod 2114964 - kola dhcp.propagation test failing 2115315 - README file for helm charts coded in Chinese shows messy characters when viewing in developer perspective. 2115435 - [4.11] INIT container stuck forever 2115564 - ClusterVersion availableUpdates is stale: PromQL conditional risks vs. slow/stuck Thanos 2115817 - Updates / config metrics are not available in 4.11 2116009 - Node Tuning Operator(NTO) - OCP upgrade failed due to node-tuning CO still progressing 2116557 - Order of config attributes are not maintained during conversion of PT4l from ptpconfig to ptp4l.0.config file 2117223 - kubernetes-nmstate-operator fails to install with error "no channel heads (entries not replaced by another entry) found in channel" 2117324 - catalog-operator fatal error: concurrent map writes 2117353 - kola dhcp.propagation test out of memory 2117370 - Migrate openshift-ansible to ansible-core 2117746 - Bump to latest k8s.io 1.24 release 2118214 - dummy bug for 4.10.z bz2118209 2118375 - pass the "--quiet" option via the buildconfig for s2i

  2. JIRA issues fixed (https://issues.jboss.org/):

OCPBUGS-1 - Test Bug

  1. Summary:

Red Hat OpenShift Container Platform release 4.13.0 is now available with updates to packages and images that fix several bugs and add enhancements. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.13.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2023:1325

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html

Security Fix(es):

  • goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be (CVE-2021-4238)

  • go-yaml: Denial of Service in go-yaml (CVE-2021-4235)

  • mongo-go-driver: specific cstrings input may not be properly validated (CVE-2021-20329)

  • golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)

  • prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)

  • helm: Denial of service through repository index file (CVE-2022-23525)

  • helm: Denial of service through schema file (CVE-2022-23526)

  • golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)

  • vault: insufficient certificate revocation list checking (CVE-2022-41316)

  • golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests (CVE-2022-41717)

  • x/net/http2/h2c: request smuggling (CVE-2022-41721)

  • net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding (CVE-2022-41723)

  • golang: crypto/tls: large handshake records may cause panics (CVE-2022-41724)

  • golang: net/http, mime/multipart: denial of service from excessive resource consumption (CVE-2022-41725)

  • exporter-toolkit: authentication bypass via cache poisoning (CVE-2022-46146)

  • vault: Vault’s Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File (CVE-2023-0620)

  • hashicorp/vault: Vault’s PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata (CVE-2023-0665)

  • hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations (CVE-2023-25000)

  • helm: getHostByName Function Information Disclosure (CVE-2023-25165)

  • containerd: Supplementary groups are not set up properly (CVE-2023-25173)

  • runc: volume mount race condition (regression of CVE-2019-19921) (CVE-2023-27561)

  • runc: AppArmor can be bypassed when /proc inside the container is symlinked with a specific mount configuration (CVE-2023-28642)

  • baremetal-operator: plain-text username and hashed password readable by anyone having a cluster-wide read-access (CVE-2023-30841)

  • runc: Rootless runc makes /sys/fs/cgroup writable (CVE-2023-25809)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

All OpenShift Container Platform 4.13 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift CLI (oc) or web console. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html
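As a small illustration of the CLI route mentioned above, the sketch below simply wraps `oc adm upgrade`, which prints the cluster's current version and the updates recommended for its channel. It assumes the `oc` binary is on PATH and that a kubeconfig with sufficient privileges is configured.

```python
# Minimal sketch: print the updates the cluster currently considers available.
# Assumes `oc` is on PATH and KUBECONFIG points at a cluster you can query;
# `oc adm upgrade` with no arguments lists the current and recommended versions.
import subprocess


def available_updates():
    return subprocess.run(
        ["oc", "adm", "upgrade"], capture_output=True, text=True, check=True
    ).stdout


if __name__ == "__main__":
    print(available_updates())
```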

  1. Solution:

For OpenShift Container Platform 4.13 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html

You may download the oc tool and use it to inspect release image metadata for x86_64, s390x, ppc64le, and aarch64 architectures. The image digests may be found at https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags

The sha256 digest values for the release are:

(For x86_64 architecture) The image digest is sha256:74b23ed4bbb593195a721373ed6693687a9b444c97065ce8ac653ba464375711

(For s390x architecture) The image digest is sha256:a32d509d960eb3e889a22c4673729f95170489789c85308794287e6e9248fb79

(For ppc64le architecture) The image digest is sha256:bca0e4a4ed28b799e860e302c4f6bb7e11598f7c136c56938db0bf9593fb76f8

(For aarch64 architecture) The image digest is sha256:e07e4075c07fca21a1aed9d7f9c165696b1d0fa4940a219a000894e5683d846c

All OpenShift Container Platform 4.13 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html
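The per-architecture digests listed above can be fed straight to `oc adm release info` to inspect the release image metadata mentioned earlier. The sketch below does this for the x86_64 digest; it assumes `oc` is on PATH and can reach quay.io.

```python
# Hedged sketch: inspect release metadata for the x86_64 image digest listed
# above. Assumes `oc` is on PATH and has network access to quay.io.
import subprocess

RELEASE_PULLSPEC = (
    "quay.io/openshift-release-dev/ocp-release@"
    "sha256:74b23ed4bbb593195a721373ed6693687a9b444c97065ce8ac653ba464375711"
)

if __name__ == "__main__":
    info = subprocess.run(
        ["oc", "adm", "release", "info", RELEASE_PULLSPEC],
        capture_output=True, text=True, check=True,
    )
    print(info.stdout)
```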

  1. Bugs fixed (https://bugzilla.redhat.com/):

1770297 - console odo download link needs to go to an official location or have caveats [openshift-4.4] 1853264 - Metrics produce high unbound cardinality 1877261 - [RFE] Mounted volume size issue when restore a larger size pvc than snapshot 1904573 - OpenShift: containers modify /etc/passwd group writable 1943194 - when using gpus, more nodes than needed are created by the node autoscaler 1948666 - After entering valid git repo url on Import from git page, throwing warning message instead Validated 1971033 - CVE-2021-20329 mongo-go-driver: specific cstrings input may not be properly validated 2005232 - Pods list page should only show Create Pod button to user has sufficient permission 2016006 - Repositories list does not show the running pipelinerun as last pipelinerun 2027000 - The user is ignored when we create a new file using a MachineConfig 2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter 2047299 - nodeport not reachable port connection timeout 2050230 - Implement LIST call chunking in openshift-sdn 2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server 2065166 - GCP - Less privileged service accounts are created with Service Account User role 2066388 - Wrong Error generates when https is missing in the value of regionEndpoint in configs.imageregistry.operator.openshift.io/cluster 2066664 - [cluster-storage-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles 2070744 - openshift-install destroy in us-gov-west-1 results in infinite loop - AWS govcloud 2075548 - Support AllocateLoadBalancerNodePorts=False with ETP=local, LGW mode 2076619 - Could not create deployment with an unknown git repo and builder image build strategy 2078222 - egressIPs behave inconsistently towards in-cluster traffic (hosts and services backed by host-networked pods) 2079981 - PVs not deleting on azure (or very slow to delete) since CSI migration to azuredisk 2081858 - OVN-Kubernetes: SyncServices for nodePortWatcherIptables should propagate failures back to caller 2083087 - "Delete dependent objects of this resource" might cause confusions 2084452 - PodDisruptionBudgets help message should be semantic 2087043 - Cluster API components should use K8s 1.24 dependencies 2087553 - No rhcos-4.11/x86_64 images in the 2 new regions on alibabacloud, "ap-northeast-2 (South Korea (Seoul))" and "ap-southeast-7 (Thailand (Bangkok))" 2089093 - CVO hotloops on OperatorGroup due to the diff of "upgradeStrategy": string("Default") 2089138 - CVO hotloops on ValidatingWebhookConfiguration /performance-addon-operator 2090680 - upgrade for a disconnected cluster get hang on retrieving and verifying payload 2092567 - Network policy is not being applied as expected 2092811 - Datastore name is too long 2093339 - [rebase v1.24] Only known images used by tests 2095719 - serviceaccounts are not updated after upgrade from 4.10 to 4.11 2100181 - WebScale: configure-ovs.sh fails because it picks the wrong default interface 2100429 - [apiserver-auth] default SCC restricted allow volumes don't have "ephemeral" caused deployment with Generic Ephemeral Volumes stuck at Pending 2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS 2104978 - MCD degrades are not overwrite-able by subsequent errors 2110565 - PDB: Remove add/edit/remove actions in Pod resource action menu 2110570 - Topology sidebar: Edit pod count shows not the latest replicas value when edit the count again 2110982 - On GCP, need to check load 
balancer health check IPs required for restricted installation 2113973 - operator scc is nor fixed when we define a custom scc with readOnlyRootFilesystem: true 2114515 - Getting critical NodeFilesystemAlmostOutOfSpace alert for 4K tmpfs 2115265 - Search page: LazyActionMenus are shown below Add/Remove from navigation button 2116686 - [capi] Cluster kind should be valid 2117374 - Improve Pod Admission failure for restricted-v2 denials that pass with restricted 2135339 - CVE-2022-41316 vault: insufficient certificate revocation list checking 2149436 - CVE-2022-46146 exporter-toolkit: authentication bypass via cache poisoning 2154196 - CVE-2022-23526 helm: Denial of service through schema file 2154202 - CVE-2022-23525 helm: Denial of service through through repository index file 2156727 - CVE-2021-4235 go-yaml: Denial of Service in go-yaml 2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be 2161274 - CVE-2022-41717 golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests 2162182 - CVE-2022-41721 x/net/http2/h2c: request smuggling 2168458 - CVE-2023-25165 helm: getHostByName Function Information Disclosure 2174485 - CVE-2023-25173 containerd: Supplementary groups are not set up properly 2175721 - CVE-2023-27561 runc: volume mount race condition (regression of CVE-2019-19921) 2178358 - CVE-2022-41723 net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding 2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption 2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics 2182883 - CVE-2023-28642 runc: AppArmor can be bypassed when /proc inside the container is symlinked with a specific mount configuration 2182884 - CVE-2023-25809 runc: Rootless runc makes /sys/fs/cgroup writable 2182972 - CVE-2023-25000 hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations 2182981 - CVE-2023-0665 hashicorp/vault: Vault?s PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata 2184663 - CVE-2023-0620 vault: Vault?s Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File 2190116 - CVE-2023-30841 baremetal-operator: plain-text username and hashed password readable by anyone having a cluster-wide read-access

  1. JIRA issues fixed (https://issues.jboss.org/):

OCPBUGS-10036 - Enable aesgcm encryption provider by default in openshift/api OCPBUGS-10038 - Enable aesgcm encryption provider by default in openshift/cluster-config-operator OCPBUGS-10042 - Enable aesgcm encryption provider by default in openshift/cluster-kube-apiserver-operator OCPBUGS-10043 - Enable aesgcm encryption provider by default in openshift/cluster-openshift-apiserver-operator OCPBUGS-10044 - Enable aesgcm encryption provider by default in openshift/cluster-authentication-operator OCPBUGS-10047 - oc-mirror print log: unable to parse reference oci://mno/redhat-operator-index:v4.12 OCPBUGS-10057 - With WPC card configured as GM or BC, phc2sys clock lock state is shown as FREERUN in ptp metrics while it should be LOCKED OCPBUGS-10213 - aws: mismatch between RHCOS and AWS SDK regions OCPBUGS-10220 - Newly provisioned machines unable to join cluster OCPBUGS-10221 - Risk cache warming takes too long on channel changes OCPBUGS-10237 - Limit the nested repository path while mirroring the images using oc-mirror for those who cant have nested paths in their container registry OCPBUGS-10239 - [release-4.13] Fix of ServiceAccounts gathering OCPBUGS-10249 - PollConsoleUpdates won't fire toast if one or more manifests errors when plugins change OCPBUGS-10267 - NetworkManager TUI quits regardless of a detected unsupported configuration OCPBUGS-10271 - [4.13] Netflink overflow alert OCPBUGS-10278 - Graph-data is not mounted on graph-builder correctly while install using graph-data image built by oc-mirror OCPBUGS-10281 - Openshift Ansible OVS version out of sync with RHCOS OCPBUGS-10291 - Broken link for Ansible tagging OCPBUGS-10298 - TenantID is ignored in some cases OCPBUGS-10320 - Catalogs should not be included in the ImageContentSourcePolicy.yaml OCPBUGS-10321 - command cannot be worked after chroot /host for oc debug pod OCPBUGS-1033 - Multiple extra manifests in the same file are not applied correctly OCPBUGS-10334 - Nutanix cloud-controller-manager pod not have permission to get/list ConfigMap OCPBUGS-10353 - kube-apiserver not receiving or processing shutdown signal after coreos 9.2 bump OCPBUGS-10367 - Pausing pools in OCP 4.13 will cause critical alerts to fire OCPBUGS-10377 - [gcp] IPI installation with Shielded VMs enabled failed on restarting the master machines OCPBUGS-10404 - Workload annotation missing from deployments OCPBUGS-10421 - RHCOS 4.13 live iso x84_64 contains restrictive policy.json OCPBUGS-10426 - node-topology is not exported due to kubelet.sock: connect: permission denied OCPBUGS-10427 - 4.1 born cluster fails to scale-up due to podman run missing --authfile flag OCPBUGS-10432 - CSI Inline Volume admission plugin does not log object name correctly OCPBUGS-10440 - OVN IPSec - does not create IPSec tunnels OCPBUGS-10474 - OpenShift pipeline TaskRun(s) column Duration is not present as column in UI OCPBUGS-10476 - Disable netlink mode of netclass collector in Node Exporter. 
OCPBUGS-1048 - if tag categories don't exist, the installation will fail to bootstrap OCPBUGS-10483 - [4.13 arm64 image][AWS EFS] Driver fails to get installed/exec format error OCPBUGS-10558 - MAPO failing to retrieve flavour information after rotating credentials OCPBUGS-10585 - [4.13] Request to update RHCOS installer bootimage metadata OCPBUGS-10586 - Console shows x509 error when requesting token from oauth endpoint OCPBUGS-10597 - The agent-tui shows again during the installation OCPBUGS-1061 - administrator console, monitoring-alertmanager-edit user list or create silence, "Observe - Alerting - Silences" page is pending OCPBUGS-10645 - 4.13: Operands running management side missing affinity, tolerations, node selector and priority rules than the operator OCPBUGS-10656 - create image command erroneously logs that Base ISO was obtained from release OCPBUGS-10657 - When releaseImage is a digest the create image command generates spurious warning OCPBUGS-10658 - Wrong PrimarySubnet in OpenstackProviderSpec when using Failure Domains OCPBUGS-10661 - machine API operator failing with No Major.Minor.Patch elements found OCPBUGS-10678 - Developer catalog shows ImageStreams as samples which has no sampleRepo OCPBUGS-10679 - Show type of sample on the samples view OCPBUGS-10689 - [IPI on BareMetal]: Workers failing inspection when installing with proxy OCPBUGS-10697 - [release-4.13] User is allowed to create IP Address pool with duplicate entries for namespace and matchExpression for serviceSelector and namespaceSelector OCPBUGS-10698 - [release-4.13] Already assigned IP address is removed from a service on editing the ip address pool. OCPBUGS-10710 - Metal virtual media job permafails during early bootstrap OCPBUGS-10716 - Image Registry default to Removed on IBM cloud after 4.13.0-ec.3 OCPBUGS-10739 - [4.13] Bootimage bump tracker OCPBUGS-10744 - [4.13] EgressFirewall status disappeared OCPBUGS-10746 - Downstream Operator-SDK v1.22.2 to OCP 4.13 OCPBUGS-10771 - upgrade test failure with "Cluster operator control-plane-machine-set is not available" OCPBUGS-10773 - TestNewAppRun unit test failing OCPBUGS-10792 - Hypershift namespace servicemonitor has wrong API group OCPBUGS-10793 - Ignore device list missing in Node Exporter OCPBUGS-10796 - [4.13] Egress firewall is not retried on error OCPBUGS-10799 - Network policy perf improvements OCPBUGS-10801 - [4.13] Upgrade to 4.10 stalled on timeout completing syncEgressFirewall OCPBUGS-10811 - Missing vCenter build number in telemetry OCPBUGS-10813 - SCOS bootstrap should skip pivot when root is not writable OCPBUGS-10826 - RHEL 9.2 doesn't contain the kernel-abi-whitelists package. 
OCPBUGS-10832 - Edit Deployment (and DC) form doesn't enable Save button when changing strategy type OCPBUGS-10833 - update the default pipelineRun template name OCPBUGS-10834 - [OVNK] [IC] Having only one leader election in the master process OCPBUGS-10873 - OVN to OVN-H migration seems broken OCPBUGS-10888 - oauth-server fails to invalidate cache, causing non existing groups being referenced OCPBUGS-10890 - Hypershift replace upgrade: node in NotReady after upgrading from a 4.14 image to another 4.14 image OCPBUGS-10891 - Cluster Autoscaler balancing similar nodes test fails randomly OCPBUGS-10892 - Passwords printed in log messages OCPBUGS-10893 - Remove unsupported warning in oc-mirror when using the --skip-pruning flag OCPBUGS-10902 - [IBMCloud] destroyed the private cluster, fail to cleanup the dns records OCPBUGS-10903 - [IBMCloud] fail to ssh to master/bootstrap/worker nodes from the bastion inside a customer vpc. OCPBUGS-10907 - move to rhel9 in DTK for 4.13 OCPBUGS-10914 - Node healthz server: return unhealthy when pod is to be deleted OCPBUGS-10919 - Update Samples Operator to use latest jenkins 4.12 release OCPBUGS-10923 - Cluster bootstrap waits for only one master to join before finishing OCPBUGS-10929 - Kube 1.26 for ovn-k OCPBUGS-10946 - For IPv6-primary dual-stack cluster, kubelet.service renders only single node-ip OCPBUGS-10951 - When imagesetconfigure without OCI FBC format config, but command with use-oci-feature flag, the oc-mirror command should check the imagesetconfigure firstly and print error immediately OCPBUGS-10953 - ovnkube-node does not close up correctly OCPBUGS-10955 - [release-4.13] NMstate complains about ping not working when adding multiple routing tables with different gateways OCPBUGS-10960 - [4.13] Vertical Scaling: do not trigger inadvertent machine deletion during bootstrap OCPBUGS-10965 - The network-tools image stream is missing in the cluster samples OCPBUGS-10982 - [4.13] nodeSelector in EgressFirewall doesn't work in dualstack cluster OCPBUGS-10989 - Agent create sub-command is returning fatal error OCPBUGS-10990 - EgressIP doesn't work in GCP XPN cluster OCPBUGS-11004 - Bootstrap kubelet client cert should include system:serviceaccounts group OCPBUGS-11010 - [vsphere] zone cluster installation fails if vSphere Cluster is embedded in Folder OCPBUGS-11022 - [4.13][scale] all egressfirewalls will be updated on every node update OCPBUGS-11023 - [4.13][scale] Ingress network policy creates more flows than before OCPBUGS-11031 - SNO OCP upgrade from 4.12 to 4.13 failed due to node-tuning operator is not available - tuned pod stuck at Terminating OCPBUGS-11032 - Update the validation interval for the cluster transfer to 12 hours OCPBUGS-11040 - --container-runtime is being removed in k8s 1.27 OCPBUGS-11054 - GCP: add europe-west12 region to the survey as supported region OCPBUGS-11055 - APIServer service isn't selected correctly for PublicAndPrivate cluster when external-dns is not configured OCPBUGS-11058 - [4.13] Conmon leaks symbolic links in /var/run/crio when pods are deleted OCPBUGS-11068 - nodeip-configuration not enabled for VSphere UPI OCPBUGS-11107 - Alerts display incorrect source when adding external alert sources OCPBUGS-11117 - The provided gcc RPM inside DTK does not match the gcc used to build the kernel OCPBUGS-11120 - DTK docs should mention the ubi9 base image instead of ubi8 OCPBUGS-11213 - BMH moves to deleting before all finalizers are processed OCPBUGS-11218 - "pipelines-as-code-pipelinerun-go" configMap is not been used for 
the Go repository OCPBUGS-11222 - kube-controller-manager cluster operator is degraded due connection refused while querying rules OCPBUGS-11227 - Relax CSR check due to k8s 1.27 changes OCPBUGS-11232 - All projects options shows as undefined after selection in Dev perspective Pipelines page OCPBUGS-11248 - Secret name variable get renders in Create Image pull secret alert OCPBUGS-1125 - Fix disaster recovery test [sig-etcd][Feature:DisasterRecovery][Disruptive] [Feature:EtcdRecovery] Cluster should restore itself after quorum loss [Serial] OCPBUGS-11257 - egressip cannot be assigned on hypershift hosted cluster node OCPBUGS-11261 - [AWS][4.13] installer get stuck if BYO private hosted zone is configured OCPBUGS-11263 - PTP KPI version 4.13 RC2 WPC - offset jumps to huge numbers OCPBUGS-11307 - Egress firewall node selector test missing OCPBUGS-11333 - startupProbe for UWM prometheus is still 15m OCPBUGS-11339 - ose-ansible-operator base image version is still 4.12 in the operators that generated by operator-sdk 4.13 OCPBUGS-11340 - ose-helm-operator base image version is still 4.12 in the operators that generated by operator-sdk 4.13 OCPBUGS-11341 - openshift-manila-csi-driver is missing the workload.openshift.io/allowed label OCPBUGS-11354 - CPMS: node readiness transitions not always trigger reconcile OCPBUGS-11384 - Switching from enabling realTime to disabling Realtime Workloadhint causes stalld to be enabled OCPBUGS-11390 - Service Binding Operator installation fails: "A subscription for this operator already exists in namespace ..." OCPBUGS-11424 - [release-4.13] new whereabouts reconciler relies on HOSTNAME which != spec.nodeName OCPBUGS-11427 - [release-4.13] whereabouts reads wrong annotation "k8s.v1.cni.cncf.io/networks-status", should be "k8s.v1.cni.cncf.io/network-status" OCPBUGS-11456 - PTP - When GM and downstream slaves are configured on same server, ptp metrics show slaves as FREERUN OCPBUGS-11458 - Ingress Takes 40s on Average Downtime During GCP OVN Upgrades OCPBUGS-11460 - CPMS doesn't always generate configurations for AWS OCPBUGS-11468 - Community operator cannot be mirrored due to malformed image address OCPBUGS-11469 - [release4.13] "exclude bundles with olm.deprecated property when rendering" not backport OCPBUGS-11473 - NS autolabeler requires RoleBinding subject namespace to be set when using ServiceAccount OCPBUGS-11485 - [4.13] NVMe disk by-id rename breaks LSO/ODF OCPBUGS-11503 - Update 4.13 cluster-network-operator image in Dockerfile to be consistent with ART OCPBUGS-11506 - CPMS e2e periodics tests timeout failures OCPBUGS-11507 - Potential 4.12 to 4.13 upgrade failure due to NIC rename OCPBUGS-11510 - Setting cpu-quota.crio.io to disable with crun causes container creation to fail OCPBUGS-11511 - [4.13] static container pod cannot be running due to CNI request failed with status 400 OCPBUGS-11529 - [Azure] fail to collect the vm serial log with ?gather bootstrap? 
OCPBUGS-11536 - Cluster monitoring operator runs node-exporter with btrfs collector OCPBUGS-11545 - multus-admission-controller should not run as root under Hypershift-managed CNO OCPBUGS-11558 - multus-admission-controller should not run as root under Hypershift-managed CNO OCPBUGS-11589 - Ensure systemd is compatible with rhel8 journalctl OCPBUGS-11598 - openshift-azure-routes triggered continously on rhel9 OCPBUGS-11606 - User configured In-cluster proxy configuration squashed in hypershift OCPBUGS-11643 - Updating kube-rbac-proxy images to be consistent with ART OCPBUGS-11657 - [4.13] Static IPv6 LACP bonding is randomly failing in RHCOS 413.92 OCPBUGS-11659 - Error extracting libnmstate.so.1.3.3 when create image OCPBUGS-11661 - AWS s3 policy changes block all OCP installs on AWS OCPBUGS-11669 - Bump to kubernetes 1.26.3 OCPBUGS-11683 - [4.13] Add Controller health to CEO liveness probe OCPBUGS-11694 - [4.13] Update legacy toolbox to use registry.redhat.io/rhel9/support-tools OCPBUGS-11706 - ccoctl cannot create STS documents in 4.10-4.13 due to s3 policy changes OCPBUGS-11750 - TuningCNI cnf-test failure: sysctl allowlist update OCPBUGS-11765 - [4.13] Keep current OpenSSH default config in RHCOS 9 OCPBUGS-11776 - [4.13] VSphereStorageDriver does not document the platform default OCPBUGS-11778 - Upgrade SNO: no resolv.conf caused by failure in forcedns dispatcher script OCPBUGS-11787 - Update 4.14 ose-vmware-vsphere-csi-driver image to be consistent with ART OCPBUGS-11789 - [4.13] Bootimage bump tracker OCPBUGS-11799 - [4.13] Bootimage bump tracker OCPBUGS-11823 - [Reliability]kube-apiserver's memory usage keep increasing to max 3GB in 7 days OCPBUGS-11848 - PtpOperatorsConfig not applying correctly OCPBUGS-11866 - Pipeline is not removed when Deployment/DC/Knative Service or Application is deleted OCPBUGS-11870 - [4.13] Nodes in Ironic are created without namespaces initially OCPBUGS-11876 - oc-mirror generated file-based catalogs crashloop OCPBUGS-11908 - Got the file exists error when different digest direct to the same tag OCPBUGS-11917 - the warn message won't disappear in co/node-tuning when scale down machineset OCPBUGS-11919 - Console metrics could have a high cardinality (4.13) OCPBUGS-11950 - fail to create vSphere IPI cluster as apiVIP and ingressVIP are not in machine networks OCPBUGS-11955 - NTP config not applied OCPBUGS-11968 - Instance shouldn't be moved back from f to a OCPBUGS-11985 - [4.13] Ironic inspector service should be proxied OCPBUGS-12172 - Users don't know what type of resource is being created by Import from Git or Deploy Image flows OCPBUGS-12179 - agent-tui is failing to start when using libnmstate.2 OCPBUGS-12186 - Pipeline doesn't render correctly when displayed but looks fine in edit mode OCPBUGS-12198 - create hosted cluster failed with aws s3 access issue OCPBUGS-12212 - cluster failed to convert from dualstack to ipv4 single stack OCPBUGS-12225 - Add new OCP 4.13 storage admission plugin OCPBUGS-12257 - Catalogs rebuilt by oc-mirror are in crashloop : cache is invalid OCPBUGS-12259 - oc-mirror fails to complete with heads only complaining about devworkspace-operator OCPBUGS-12271 - Hypershift conformance test fails new cpu partitioning tests OCPBUGS-12272 - Importing a kn Service shows a non-working Open URL decorator also when the Add Route checkbox was unselected OCPBUGS-12273 - When Creating Sample Devfile from the Samples Page, Topology Icon is not set OCPBUGS-12450 - [4.13] Fix Flake 
TestAttemptToScaleDown/scale_down_only_by_one_machine_at_a_time OCPBUGS-12465 - --use-oci-feature leads to confusion and needs to be better named OCPBUGS-12478 - CSI driver + operator containers are not pinned to mgmt cores OCPBUGS-1264 - e2e-vsphere-zones failing due to unable to parse cloud-config OCPBUGS-12698 - redfish-virtualmedia mount not working OCPBUGS-12703 - redfish-virtualmedia mount not working OCPBUGS-12708 - [4.13] Changing a PreprovisioningImage ImageURL and/or ExtraKernelParams should reboot the host OCPBUGS-1272 - "opm alpha render-veneer basic" doesn't support pipe stdin OCPBUGS-12737 - Multus admission controller must have "hypershift.openshift.io/release-image" annotation when CNO is managed by Hypershift OCPBUGS-12786 - OLM CatalogSources in guest cluster cannot pull images if pre-GA OCPBUGS-12804 - Dual stack VIPs incompatible with EnableUnicast setting OCPBUGS-12854 - cluster-reader role cannot access "k8s.ovn.org" API Group resources OCPBUGS-12862 - IPv6 ingress VIP not configured in keepalived on vSphere Dual-stack OCPBUGS-12865 - Kubernetes-NMState CI is perma-failing OCPBUGS-12933 - Node Tuning Operator crashloops when in Hypershift mode OCPBUGS-12994 - TCP DNS Local Preference is not working for Openshift SDN OCPBUGS-12999 - Backport owners through 4.13, 4.12 OCPBUGS-13029 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13 OCPBUGS-13057 - ppc64le releases don't install because ovs fails to start (invalid permissions) OCPBUGS-13069 - [whereabouts-cni] CNO must use reconciliation controller in order to support dual stack in 4.12 [4.13 dependency] OCPBUGS-13071 - CI fails on TestClientTLS OCPBUGS-13072 - Capture tests don't work in OVNK OCPBUGS-13076 - Load balancers/ Ingress controller removal race condition OCPBUGS-13157 - CI fails on TestRouterCompressionOperation OCPBUGS-13254 - Nutanix cloud provider should use Kubernetes 1.26 dependencies OCPBUGS-1327 - [IBMCloud] Worker machines unreachable during initial bring up OCPBUGS-1352 - OVN silently failing in case of a stuck pod OCPBUGS-1427 - Ignore non-ready endpoints when processing endpointslices OCPBUGS-1428 - service account token secret reference OCPBUGS-1435 - [Ingress Node Firewall Operator] [Web Console] Allow user to override namespace where the operator is installed, currently user can install it only in openshift-operators ns OCPBUGS-1443 - Unable to get ClusterVersion error while upgrading 4.11 to 4.12 OCPBUGS-1453 - TargetDown alert expression is NOT correctly joining kube-state-metrics metric OCPBUGS-1458 - cvo pod crashloop during bootstrap: featuregates: connection refused OCPBUGS-1486 - Avoid re-metric'ing the pods that are already setup when ovnkube-master disrupts/reinitializes/restarts/goes through leader election OCPBUGS-1557 - Default to floating automaticRestart for new GCP instances OCPBUGS-1560 - [vsphere] installation fails when only configure single zone in install-config OCPBUGS-1565 - Possible split brain with keepalived unicast OCPBUGS-1566 - Automation Offline CPUs Test cases OCPBUGS-1577 - Incorrect network configuration in worker node with two interfaces OCPBUGS-1604 - Common resources out-of-date when using multicluster switcher OCPBUGS-1606 - Multi-cluster: We should not filter OLM catalog by console pod architecture and OS on managed clusters OCPBUGS-1612 - [vsphere] installation errors out when missing topology in a failure domain OCPBUGS-1617 - Remove unused node.kubernetes.io/not-reachable toleration OCPBUGS-1627 - [vsphere] installation fails when 
setting user-defined folder in failure domain OCPBUGS-1646 - [osp][octavia lb] LBs type svcs not updated until all the LBs are created OCPBUGS-166 - 4.11 SNOs fail to complete install because of "failed to get pod annotation: timed out waiting for annotations: context deadline exceeded" OCPBUGS-1665 - Scorecard failed because of the request of PodSecurity OCPBUGS-1671 - Creating a statefulset with the example image from the UI on ARM64 leads to a Pod in crashloopbackoff due to the only-amd64 image provided OCPBUGS-1704 - [gcp] when the optional Service Usage API is disabled, IPI installation cannot succeed OCPBUGS-1725 - Affinity rule created in router deployment for single-replica infrastructure and "NodePortService" endpoint publishing strategy OCPBUGS-1741 - Can't load additional Alertmanager templates with latest 4.12 OpenShift OCPBUGS-1748 - PipelineRun templates must be fetched from OpenShift namespace OCPBUGS-1761 - osImages that cannot be pulled do not set the node as Degraded properly OCPBUGS-1769 - gracefully fail when iam:GetRole is denied OCPBUGS-1778 - Can't install clusters with schedulable masters OCPBUGS-1791 - Wait-for install-complete did not exit upon completion. OCPBUGS-1805 - [vsphere-csi-driver-operator] CSI cloud.conf doesn't list multiple datacenters when specified OCPBUGS-1807 - Ingress Operator startup bad log message formatting OCPBUGS-1844 - Ironic dnsmasq doesn't include existing DNS settings during iPXE boot OCPBUGS-1852 - [RHOCP 4.10] Subscription tab for operator doesn't land on correct URL OCPBUGS-186 - PipelineRun task status overlaps status text OCPBUGS-1998 - Cluster monitoring fails to achieve new level during upgrade w/ unavailable node OCPBUGS-2015 - TestCertRotationTimeUpgradeable failing consistently in kube-apiserver-operator OCPBUGS-2083 - OCP 4.10.33 uses a weak 3DES cipher in the VMWare CSI Operator for communication and provides no method to disable it OCPBUGS-2088 - User can set rendezvous host to be a worker OCPBUGS-2141 - doc link in PrometheusDataPersistenceNotConfigured message is 4.8 OCPBUGS-2145 - 'maxUnavailable' and 'minAvailable' on PDB creation page - i18n misses OCPBUGS-2209 - Hard eviction thresholds is different with k8s default when PAO is enabled OCPBUGS-2248 - [alibabacloud] IPI installation failed with master nodes being NotReady and CCM error "alicloud: unable to split instanceid and region from providerID" OCPBUGS-2260 - KubePodNotReady - Increase Tolerance During Master Node Restarts OCPBUGS-2306 - On Make Serverless page, to change values of the inputs minpod, maxpod and concurrency fields, we need to click the ? + ? or ? - ', it can't be changed by typing in it. 
OCPBUGS-2319 - metal-ipi upgrade success rate dropped 30+% in last week OCPBUGS-2384 - [2035720] [IPI on Alibabacloud] deploying a private cluster by 'publish: Internal' failed due to 'dns_public_record' OCPBUGS-2440 - unknown field logs in prometheus-operator OCPBUGS-2471 - BareMetalHost is available without cleaning if the cleaning attempt fails OCPBUGS-2479 - Right border radius is 0 for the pipeline visualization wrapper in dark mode OCPBUGS-2500 - Developer Topology always blanks with large contents when first rendering OCPBUGS-2513 - Disconnected cluster installation fails with pull secret must contain auth for "registry.ci.openshift.org" OCPBUGS-2525 - [CI Watcher] Ongoing timeout failures associated with multiple CRD-extensions tests OCPBUGS-2532 - Upgrades from 4.11.9 to latest 4.12.x Nightly builds do not succeed OCPBUGS-2551 - "Error loading" when normal user check operands on All namespaces OCPBUGS-2569 - ovn-k network policy races OCPBUGS-2579 - Helm Charts and Samples are not disabled in topology actions if actions are disabled in customization OCPBUGS-266 - Project Access tab cannot differentiate between users and groups OCPBUGS-2666 - create a project link not backed by RBAC check OCPBUGS-272 - Getting duplicate word "find" when kube-apiserver degraded=true if webhook matches a virtual resource OCPBUGS-2727 - ClusterVersionRecommendedUpdate condition blocks explicitly allowed upgrade which is not in the available updates OCPBUGS-2729 - should ignore enP.* NICs from node-exporter on Azure cluster OCPBUGS-2735 - Operand List Page Layout Incorrect on small screen size. OCPBUGS-2738 - CVE-2022-26945 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323 ose-baremetal-installer-container: various flaws [openshift-4.13.z] OCPBUGS-2824 - The dropdown list component will be covered by deployment details page on Topology page OCPBUGS-2827 - OVNK: NAT issue for packets exceeding check_pkt_larger() for NodePort services that route to hostNetworked pods OCPBUGS-2841 - Need validation rule for supported arch OCPBUGS-2845 - Unable to use application credentials for Cinder CSI after OpenStack credentials update OCPBUGS-2847 - GCP XPN should only be available with Tech Preview OCPBUGS-2851 - [OCI feature] registries.conf support in oc mirror OCPBUGS-2852 - etcd failure: failed to make etcd client for endpoints [https://[2620:52:0:1eb:367x:5axx:xxx:xxx]:2379]: context deadline exceeded OCPBUGS-2868 - Container networking pods cannot be access hosted network pods on another node in ipv6 single stack cluster OCPBUGS-2873 - Prometheus doesn't reload TLS certificate and key files on disk OCPBUGS-2886 - The LoadBalaner section shouldn't be set when using Kuryr on cloud-provider OCPBUGS-2891 - AWS Deprovision Fails with unrecognized elastic load balancing resource type listener OCPBUGS-2895 - [RFE] 4.11 Azure DiskEncryptionSet static validation does not support upper-case letters OCPBUGS-2904 - If all the actions are disabled in add page, Details on/off toggle switch to be disabled OCPBUGS-2907 - provisioning of baremetal nodes fails when using multipath device as rootDeviceHints OCPBUGS-2921 - br-ex interface not configured makes ovnkube-node Pod to crashloop OCPBUGS-2922 - 'Status' column sorting doesn't work as expected OCPBUGS-2926 - Unable to gather OpenStack console logs since kernel cmd line has no console args OCPBUGS-2934 - Ingress node firewall pod 's events container on the node causing pod in CrashLoopBackOff state when sctp module is loaded on node OCPBUGS-2941 - CIRO unable to detect swift 
when content-type is omitted in 204-responses OCPBUGS-2946 - [AWS] curl network Loadbalancer always get "Connection time out" OCPBUGS-2948 - Whereabouts CNI timesout while iterating exclude range OCPBUGS-2988 - apiserver pods cannot reach etcd on single node IPv6 cluster: transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10" OCPBUGS-2991 - CI jobs are failing with: admission webhook "validation.csi.vsphere.vmware.com" denied the request OCPBUGS-2992 - metal3 pod crashloops on OKD in BareMetal IPI or assisted-installer bare metal installations OCPBUGS-2994 - Keepalived monitor stuck for long period of time on kube-api call while installing OCPBUGS-2996 - [4.13] Bootimage bump tracker OCPBUGS-3018 - panic in WaitForBootstrapComplete OCPBUGS-3021 - GCP: missing me-west1 region OCPBUGS-3024 - Service list shows undefined:80 when type is ExternalName or LoadBalancer OCPBUGS-3027 - Metrics are not available when running console in development mode OCPBUGS-3029 - BareMetalHost CR fails to delete on cluster cleanup OCPBUGS-3033 - Clicking the logo in the masthead goes to /dashboards, even if metrics are disabled OCPBUGS-3041 - Guard Pod Hostnames Too Long and Truncated Down Into Collisions With Other Masters OCPBUGS-3069 - Should show information on page if the upgrade to a target version doesn't take effect. OCPBUGS-3072 - Operator-sdk run bundle with old sqllite index image failed OCPBUGS-3079 - RPS hook only sets the first queue, but there are now many OCPBUGS-3085 - [IPI-BareMetal]: Dual stack deployment failed on BootStrap stage
OCPBUGS-3093 - The control plane should tag AWS security groups at creation OCPBUGS-3096 - The terraform binaries shipped by the installer are not statically linked OCPBUGS-3109 - Change text colour for ConsoleNotification that notifies user that the cluster is being OCPBUGS-3114 - CNO reporting incorrect status OCPBUGS-3123 - Operator attempts to render both GA and Tech Preview API Extensions OCPBUGS-3127 - nodeip-configuration retries forever on network failure, blocking ovs-configuration, spamming syslog OCPBUGS-3168 - Add Capacity button does not exist after upgrade OCP version [OCP4.11->OCP4.12] OCPBUGS-3172 - Console shouldn't try to install dynamic plugins if permissions aren't available OCPBUGS-3180 - Regression in ptp-operator conformance tests OCPBUGS-3186 - [ibmcloud] unclear error msg when zones is not match with the Subnets in BYON install OCPBUGS-3192 - [4.8][OVN] RHEL 7.9 DHCP worker ovs-configuration fails OCPBUGS-3195 - Service-ca controller exits immediately with an error on sigterm OCPBUGS-3206 - [sdn2ovn] Migration failed in vsphere cluster OCPBUGS-3207 - SCOS build fails due to pinned kernel OCPBUGS-3214 - Installer does not always add router CA to kubeconfig OCPBUGS-3228 - Broken secret created while starting a Pipeline OCPBUGS-3235 - Topology gets stuck loading OCPBUGS-3245 - ovn-kubernetes ovnkube-master containers crashlooping after 4.11.0-0.okd-2022-10-15-073651 update OCPBUGS-3248 - CVE-2022-27191 ose-installer-container: golang: crash in a golang.org/x/crypto/ssh server [openshift-4] OCPBUGS-3253 - No warning when using wait-for vs. agent wait-for commands OCPBUGS-3272 - Unhealthy Readiness probe failed message failing CI when ovnkube DBs are still coming up OCPBUGS-3275 - No-op: Unable to retrieve machine from node "xxx": expecting one machine for node xxx got: [] OCPBUGS-3277 - Install failure in create-cluster-and-infraenv.service OCPBUGS-3278 - Shouldn't need to put host data in platform baremetal section in installconfig OCPBUGS-3280 - Install ends in preparing-failed due to container-images-available validation OCPBUGS-3283 - remove unnecessary RBAC in KCM OCPBUGS-3292 - DaemonSet "/openshift-network-diagnostics/network-check-target" is not available OCPBUGS-3314 - 'gitlab.secretReference' disappears when the buildconfig is edited on ?From View? OCPBUGS-3316 - Branch name should sanitised to match actual github branch name in repository plr list OCPBUGS-3320 - New master will be created if add duplicated failuredomains in controlplanemachineset OCPBUGS-3331 - Update dependencies in CMO release 4.13 OCPBUGS-3334 - Console should be using v1 apiVersion for ConsolePlugin model OCPBUGS-3337 - revert "force cert rotation every couple days for development" in 4.12 OCPBUGS-3338 - Environment cannot find Python OCPBUGS-3358 - Revert BUILD-407 OCPBUGS-3372 - error message is too generic when creating a silence with end time before start OCPBUGS-3373 - cluster-monitoring-view user can not list servicemonitors on "Observe -> Targets" page OCPBUGS-3377 - CephCluster and StorageCluster resources use the same paths OCPBUGS-3381 - Make ovnkube-trace work on hypershift deployments OCPBUGS-3382 - Unable to configure cluster-wide proxy OCPBUGS-3391 - seccomp profile unshare.json missing from nodes OCPBUGS-3395 - Event Source is visible without even creating knative-eventing and knative-serving. 
OCPBUGS-3404 - IngressController.spec.nodePlacement.nodeSelector.matchExpressions does not work OCPBUGS-3414 - Missing 'ImageContentSourcePolicy' and 'CatalogSource' in the oci fbc feature implementation OCPBUGS-3424 - Azure Disk CSI Driver Operator gets degraded without "CSISnapshot" capability OCPBUGS-3426 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13 OCPBUGS-3427 - Skip broken [sig-devex][Feature:ImageEcosystem] tests OCPBUGS-3438 - cloud-network-config-controller not using proxy settings of the management cluster OCPBUGS-3440 - Authentication operator doesn't respond to console being enabled OCPBUGS-3441 - Update cluster-authentication-operator not to go degraded without console OCPBUGS-3444 - [4.13] Descheduler pod is OOM killed when using descheduler-operator profiles on big clusters OCPBUGS-3456 - track rhcos-4.12 branch for fedora-coreos-config submodule OCPBUGS-3458 - Surface ClusterVersion RetrievedUpdates condition messages OCPBUGS-3465 - IBM operator needs deployment manifest fixes OCPBUGS-3473 - Allow listing crio and kernel versions in machine-os components OCPBUGS-3476 - Show Tag label and tag name if tag is detected in repository PipelineRun list and details page OCPBUGS-3480 - Baremetal Provisioning fails on HP Gen9 systems due to eTag handling OCPBUGS-3499 - Route CRD validation behavior must be the same as openshift-apiserver behavior OCPBUGS-3501 - Route CRD host-assignment behavior must be the same as openshift-apiserver behavior OCPBUGS-3502 - CRD-based and openshift-apiserver-based Route validation/defaulting must use the shared implementation OCPBUGS-3508 - masters repeatedly losing connection to API and going NotReady OCPBUGS-3524 - The storage account for the CoreOS image is publicly accessible when deploying fully private cluster on Azure OCPBUGS-3526 - oc fails to extract layers that set xattr on Darwin OCPBUGS-3539 - [OVN-provider]loadBalancer svc with monitors not working OCPBUGS-3612 - [IPI] Baremetal ovs-configure.sh script fails to start secondary bridge br-ex1 OCPBUGS-3621 - EUS upgrade stuck on worker pool update: error running skopeo inspect --no-tags OCPBUGS-3648 - Container security operator Image Manifest Vulnerabilities encounters runtime errors under some circumstances OCPBUGS-3659 - Expose AzureDisk metrics port over HTTPS OCPBUGS-3662 - don't enforce PSa in 4.12 OCPBUGS-3667 - PTP 4.12 Regression - CLOCK REALTIME status is locked when physical interface is down OCPBUGS-3668 - 4.12.0-rc.0 fails to deploy on VMware IPI OCPBUGS-3676 - After node's reboot some pods fail to start - deleteLogicalPort failed for pod cannot delete GR SNAT for pod OCPBUGS-3693 - Router e2e: drop template.openshift.io apigroup dependency OCPBUGS-3709 - Special characters in subject name breaks prefilling role binding form OCPBUGS-3713 - [vsphere-problem-detector] fully qualified username must be used when checking permissions OCPBUGS-3714 - 'oc adm upgrade ...' 
should expose ClusterVersion Failing=True OCPBUGS-3739 - Pod stuck in containerCreating state when the node on which it is running is Terminated OCPBUGS-3744 - Egress router POD creation is failing while using openshift-sdn network plugin OCPBUGS-3755 - Create Alertmanager silence form does not explain the new "Negative matcher" option OCPBUGS-3761 - Consistent e2e test failure:Events.Events: event view displays created pod OCPBUGS-3765 - [RFE] Add kernel-rpm-macros to DTK image OCPBUGS-3771 - contrib/multicluster-environment.sh needs to be updated to work with ACM cluster proxy OCPBUGS-3776 - Manage columns tooltip remains displayed after dialog is closed OCPBUGS-3777 - [Dual Stack] ovn-ipsec crashlooping due to cert signing issues OCPBUGS-3797 - [4.13] Bump OVS control plane to get "ovsdb/transaction.c: Refactor assess_weak_refs." OCPBUGS-3822 - Cluster-admin cannot know whether operator is fully deleted or not after normal user trigger "Delete CSV" OCPBUGS-3827 - CCM not able to remove a LB in ERROR state OCPBUGS-3877 - RouteTargetReference missing default for "weight" in Route CRD v1 schema OCPBUGS-3880 - [Ingress Node Firewall] Change the logo used for ingress node firewall operator OCPBUGS-3883 - Hosted ovnkubernetes pods are not being spread among workers evenly OCPBUGS-3896 - Console nav toggle button reports expanded in both expanded and not expanded states OCPBUGS-3904 - Delete/Add a failureDomain in CPMS to trigger update cannot work right on GCP OCPBUGS-3909 - Node is degraded when a machine config deploys a unit with content and mask=true OCPBUGS-3916 - expr for SDNPodNotReady is wrong due to there is not node label for kube_pod_status_ready OCPBUGS-3919 - Azure: unable to configure EgressIP if an ASG is set OCPBUGS-3921 - Openshift-install bootstrap operation cannot find a cloud defined in clouds.yaml in the current directory OCPBUGS-3923 - [CI] cluster-monitoring-operator produces more watch requests than expected OCPBUGS-3924 - Remove autoscaling/v2beta2 in 4.12 and later OCPBUGS-3929 - Use flowcontrol/v1beta2 for apf manifests in 4.13 OCPBUGS-3931 - When all extensions are installed, "libkadm5" rpm package is duplicated in the rpm -q command OCPBUGS-3933 - Fails to deprovision cluster when swift omits 'content-type' OCPBUGS-3945 - Handle 0600 kubeconfig OCPBUGS-3951 - Dynamic plugin extensions disappear from the UI when a codeRef fails to load OCPBUGS-3960 - Use kernel-rt from ose repo OCPBUGS-3965 - must-gather namespace should have ?privileged? warn and audit pod security labels besides enforce OCPBUGS-3973 - [SNO] csi-snapshot-controller CO is degraded when upgrade from 4.12 to 4.13 and reports permissions issue. 
OCPBUGS-3974 - CIRO panics when suspended flag is nil OCPBUGS-3975 - "Failed to open directory, disabling udev device properties" in node-exporter logs OCPBUGS-3978 - AWS EBS CSI driver operator is degraded without "CSISnapshot" capability OCPBUGS-3985 - Allow PSa enforcement in 4.13 by using featuresets OCPBUGS-3987 - Some nmstate validations are skipped when NM config is in agent-config.yaml OCPBUGS-3990 - HyperShift control plane operators have wrong priorityClass OCPBUGS-3993 - egressIP annotation including two interfaces when multiple networks OCPBUGS-4000 - fix operator naming convention OCPBUGS-4008 - Console deployment does not roll out when managed cluster configmap is updated OCPBUGS-4012 - Disabled Serverless add actions should not be displayed in topology menu OCPBUGS-4026 - Endless rerender loop and a stuck browser on the add and topology page when SBO is installed OCPBUGS-4047 - [CI-Watcher] e2e test flake: Create key/value secrets Validate a key/value secret OCPBUGS-4049 - MCO reconcile fails if user replace the pull secret to empty one OCPBUGS-4052 - [ALBO] OpenShift Load Balancer Operator does not properly support cluster wide proxy OCPBUGS-4054 - cluster-ingress-operator's configurable-route controller's startup is noisy OCPBUGS-4089 - Kube-State-metrics pod fails to start due to panic OCPBUGS-4090 - OCP on OSP - Image registry is deployed with cinder instead of swift storage backend OCPBUGS-4101 - Empty/missing node-sizing SYSTEM_RESERVED_ES parameter can result in kubelet not starting OCPBUGS-4110 - Form footer buttons are misaligned in web terminal form OCPBUGS-4119 - Random SYN drops in OVS bridges of OVN-Kubernetes OCPBUGS-4166 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13 OCPBUGS-4168 - Prometheus continuously restarts due to slow WAL replay OCPBUGS-4173 - vsphere-problem-detector should re-check passwords after change OCPBUGS-4181 - Prometheus and Alertmanager incorrect ExternalURL configured OCPBUGS-4184 - Use mTLS authentication for all monitoring components instead of bearer token OCPBUGS-4203 - Unnecessary padding around alert atop debug pod terminal OCPBUGS-4206 - getContainerStateValue contains incorrectly internationalized text OCPBUGS-4207 - Remove debug level logging on openshift-config-operator OCPBUGS-4219 - Add runbook link to PrometheusRuleFailures OCPBUGS-4225 - [4.13] boot sequence override request fails with Base.1.8.PropertyNotWritable on Lenovo SE450 OCPBUGS-4232 - CNCC: Wrong log format for Azure locking OCPBUGS-4245 - L2 does not work if a metallb is not able to listen to arp requests on a single interface OCPBUGS-4252 - Node Terminal tab results in error OCPBUGS-4253 - Add PodNetworkConnectivityCheck for must-gather OCPBUGS-4266 - crio.service should use a more safe restart policy to provide recoverability against concurrency issues OCPBUGS-4279 - Custom Victory-Core components in monitoring ui code causing build issues OCPBUGS-4280 - Return 0 when oc import-image failed OCPBUGS-4282 - [IR-269]Can't pull sub-manifest image using imagestream of manifest list OCPBUGS-4291 - [OVN]Sometimes after reboot egress node, egress IP cannot be applied anymore. 
OCPBUGS-4293 - Specify resources.requests for operator pod OCPBUGS-4298 - Specify resources.requests for operator pod OCPBUGS-4302 - Specify resources.requests for operator pod OCPBUGS-4305 - [4.13] Improve ironic logging configuration in metal3 OCPBUGS-4317 - [IBM][4.13][Snapshot] restore size in snapshot is not the same size of pvc request size OCPBUGS-4328 - Update installer images to be consistent with ART OCPBUGS-434 - After FIPS enabled in S390X, ingress controller in degraded state OCPBUGS-4343 - Use flowcontrol/v1beta3 for apf manifests in 4.13 OCPBUGS-4347 - set TLS cipher suites in Kube RBAC sidecars OCPBUGS-4350 - CNO in HyperShift reports upgrade complete in clusteroperator prematurely OCPBUGS-4352 - [RHOCP] HPA shows different API versions in web console OCPBUGS-4357 - Bump samples operator k8s dep to 1.25.2 OCPBUGS-4359 - cluster-dns-operator corrupts /etc/hosts when fs full OCPBUGS-4367 - Debug log messages missing from output and Info messages malformed OCPBUGS-4377 - Service name search ability while creating the Route from console OCPBUGS-4401 - limit cluster-policy-controller RBAC permissions OCPBUGS-4411 - ovnkube node pod crashed after converting to a dual-stack cluster network OCPBUGS-4417 - ip-reconciler removes the overlappingrangeipreservations whether the pod is alive or not OCPBUGS-4425 - Egress FW ACL rules are invalid in dualstack mode OCPBUGS-4447 - [MetalLB Operator] The CSV needs an update to reflect the correct version of operator OCPBUGS-446 - Cannot Add a project from DevConsole in airgap mode using git importing OCPBUGS-4483 - apply retry logic to ovnk-node controllers OCPBUGS-4490 - hypershift: csi-snapshot-controller uses wrong kubeconfig OCPBUGS-4491 - hypershift: aws-ebs-csi-driver-operator uses wrong kubeconfig OCPBUGS-4492 - [4.13] The property TransferProtocolType is required for VirtualMedia.InsertMedia OCPBUGS-4502 - [4.13] [OVNK] Add support for service session affinity timeout OCPBUGS-4516 - oc-mirror does not work as expected relative path for OCI format copy OCPBUGS-4517 - Better to detail the --command-os of mac for oc adm release extract command OCPBUGS-4521 - all kubelet targets are down after a few hours OCPBUGS-4524 - Hold lock when deleting completed pod during update event OCPBUGS-4525 - Don't log in iterateRetryResources when there are no retry entries OCPBUGS-4535 - There is no 4.13 gcp-filestore-csi-driver-operator version for test OCPBUGS-4536 - Image registry panics while deploying OCP in eu-south-2 AWS region OCPBUGS-4537 - Image registry panics while deploying OCP in eu-central-2 AWS region OCPBUGS-4538 - Image registry panics while deploying OCP in ap-south-2 AWS region OCPBUGS-4541 - Azure: remove deprecated ADAL OCPBUGS-4546 - CVE-2021-38561 ose-installer-container: golang: out-of-bounds read in golang.org/x/text/language leads to DoS [openshift-4] OCPBUGS-4549 - Azure: replace deprecated AD Graph API OCPBUGS-4550 - [CI] console-operator produces more watch requests than expected OCPBUGS-4571 - The operator recommended namespace is incorrect after change installation mode to "A specific namespace on the cluster" OCPBUGS-4574 - Machine stuck in no phase when creating in a nonexistent zone and stuck in Deleting when deleting on GCP OCPBUGS-463 - OVN-Kubernetes should not send IPs with leading zeros to OVN OCPBUGS-4630 - Bump documentationBaseURL to 4.13 OCPBUGS-4635 - [OCP 4.13] ironic container images have old packages OCPBUGS-4638 - Support RHOBS monitoring for HyperShift in CNO OCPBUGS-4652 - Fixes for RHCOS 9 based on RHEL 
9.0 OCPBUGS-4654 - Azure: UPI: Fix storage arm template to work with Galleries and MAO OCPBUGS-4659 - Network Policy executes duplicate transactions for every pod update OCPBUGS-4684 - In DeploymentConfig both the Form view and Yaml view are not in sync OCPBUGS-4689 - SNO not able to bring up Provisioning resource in 4.11.17 OCPBUGS-4691 - Topology sidebar actions doesn't show the latest resource data OCPBUGS-4692 - PTP operator: Use priority class node critical OCPBUGS-4700 - read-only update UX: confusing "Update blocked" pop-up OCPBUGS-4701 - read-only update UX: confusing "Control plane is hosted" banner OCPBUGS-4703 - Router can migrate to use LivenessProbe.TerminationGracePeriodSeconds OCPBUGS-4712 - ironic-proxy daemonset not deleted when provisioningNetwork is changed from Disabled to Managed/Unmanaged OCPBUGS-4724 - [4.13] egressIP annotations not present on OpenShift on Openstack multiAZ installation OCPBUGS-4725 - mapi_machinehealthcheck_short_circuit not properly reconciling causing MachineHealthCheckUnterminatedShortCircuit alert to fire OCPBUGS-4746 - Removal of detection of host kubelet kubeconfig breaks IBM Cloud ROKS OCPBUGS-4756 - OLM generates invalid component selector labels OCPBUGS-4757 - Revert Catalog PSA decisions for 4.13 (OLM) OCPBUGS-4758 - Revert Catalog PSA decisions for 4.13 (Marketplace) OCPBUGS-4769 - Old AWS boot images vs. 4.12: unknown provider 'ec2' OCPBUGS-4780 - Update openshift/builder release-4.13 to go1.19 OCPBUGS-4781 - Get Helm Release seems to be using List Releases api OCPBUGS-4793 - CMO may generate Kubernetes events with a wrong object reference OCPBUGS-4802 - Update formatting with gofmt for go1.19 OCPBUGS-4825 - Pods completed + deleted may leak OCPBUGS-4827 - Ingress Controller is missing a required AWS resource permission for SC2S region us-isob-east-1 OCPBUGS-4873 - openshift-marketplace namespace missing "audit-version" and "warn-version" PSA label OCPBUGS-4874 - Baremetal host data is still sometimes required OCPBUGS-4883 - Default Git type to other info alert should get remove after changing the git type OCPBUGS-4894 - Disabled Serverless add actions should not be displayed for Knative Service OCPBUGS-4899 - coreos-installer output not available in the logs OCPBUGS-4900 - Volume limits test broken on AWS and GCP TechPreview clusters OCPBUGS-4906 - Cross-namespace template processing is not being tested OCPBUGS-4909 - Can't reach own service when egress netpol are enabled OCPBUGS-4913 - Need to wait longer for VM to obtain IP from DHCP OCPBUGS-4941 - Fails to deprovision cluster when swift omits 'content-type' and there are empty containers OCPBUGS-4950 - OLM K8s Dependencies should be at 1.25 OCPBUGS-4954 - [IBMCloud] COS Reclamation prevents ResourceGroup cleanup OCPBUGS-4955 - Bundle Unpacker Using "Always" ImagePullPolicy for digests OCPBUGS-4969 - ROSA Machinepool EgressIP Labels Not Discovered OCPBUGS-4975 - Missing translation in ceph storage plugin OCPBUGS-4986 - precondition: Do not claim warnings would have blocked OCPBUGS-4997 - Agent ISO does not respect proxy settings OCPBUGS-5001 - MachineConfigControllerPausedPoolKubeletCA should have a working runbook URI OCPBUGS-501 - oc get dc fails when AllRequestBodies audit-profile is set in apiserver OCPBUGS-5010 - Should always delete the must-gather pod when run the must-gather OCPBUGS-5016 - Editing Pipeline in the ocp console to get information error OCPBUGS-5018 - Upgrade from 4.11 to 4.12 with Windows machine workers (Spot Instances) failing due to: hcnCreateEndpoint failed 
in Win32: The object already exists. OCPBUGS-5036 - Cloud Controller Managers do not react to changes in configuration leading to assorted errors OCPBUGS-5045 - unit test data race with egress ip tests OCPBUGS-5068 - [4.13] virtual media provisioning fails when iLO Ironic driver is used OCPBUGS-5073 - Connection reset by peer issue with SSL OAuth Proxy when route objects are created more than 80. OCPBUGS-5079 - [CI Watcher] pull-ci-openshift-console-master-e2e-gcp-console jobs: Process did not finish before 4h0m0s timeout OCPBUGS-5085 - Should only show the selected catalog when after apply the ICSP and catalogsource OCPBUGS-5101 - [GCP] [capi] Deletion of cluster is happening , it shouldn't be allowed OCPBUGS-5116 - machine.openshift.io API is not supported in Machine API webhooks OCPBUGS-512 - Permission denied when write data to mounted gcp filestore volume instance OCPBUGS-5124 - kubernetes-nmstate does not pass CVP tests in 4.12 OCPBUGS-5136 - provisioning on ilo4-virtualmedia BMC driver fails with error: "Creating vfat image failed: Unexpected error while running command" OCPBUGS-5140 - [alibabacloud] IPI install got bootstrap failure and without any node ready, due to enforced EIP bandwidth 5 Mbit/s OCPBUGS-5151 - Installer - provisioning interface on master node not getting ipv4 dhcp ip address from bootstrap dhcp server on OCP IPI BareMetal install OCPBUGS-5164 - Add support for API version v1beta1 for knativeServing and knativeEventing OCPBUGS-5165 - Dev Sandbox clusters uses clusterType OSD and there is no way to enforce DEVSANDBOX OCPBUGS-5182 - [azure] Fail to create master node with vm size in family ECIADSv5 and ECIASv5 OCPBUGS-5184 - [azure] Fail to create master node with vm size in standardNVSv4Family OCPBUGS-5188 - Wrong message in MCCDrainError alert OCPBUGS-5234 - [azure] Azure Stack Hub (wwt) UPI installation failed to scale up worker nodes using machinesets OCPBUGS-5235 - mapi_instance_create_failed metric cannot work when set acceleratedNetworking: true on Azure OCPBUGS-5269 - remove unnecessary RBAC in KCM: file removal OCPBUGS-5275 - remove unnecessary RBAC in OCM OCPBUGS-5287 - Bug with Red Hat Integration - 3scale - Managed Application Services causes operator-install-single-namespace.spec.ts to fail OCPBUGS-5292 - Multus: Interface name contains an invalid character / [ocp 4.13] OCPBUGS-5300 - WriteRequestBodies audit profile records routes/status events at RequestResponse level OCPBUGS-5306 - One old machine stuck in Deleting and many co get degraded when doing master replacement on the cluster with OVN network OCPBUGS-5346 - Reported vSphere Connection status is misleading OCPBUGS-5347 - Clusteroperator Available condition is updated every 2 mins when operator is disabled OCPBUGS-5353 - Dashboard graph should not be stacked - Kubernetes / Compute Resources / Pod Dashboard OCPBUGS-5410 - [AWS-EBS-CSI-Driver] provision volume using customer kms key couldn't restore its snapshot successfully OCPBUGS-5423 - openshift-marketplace pods cause PodSecurityViolation alert to fire OCPBUGS-5428 - Many plugin SDK extension docs are missing descriptions OCPBUGS-5432 - Downstream Operator-SDK v1.25.1 to OCP 4.13 OCPBUGS-5458 - wal: max entry size limit exceeded OCPBUGS-5465 - Context Deadline exceeded when PTP service is disabled from the switch OCPBUGS-5466 - Default CatalogSource aren't always reverted to default settings OCPBUGS-5492 - CI "[Feature:bond] should create a pod with bond interface" fail for MTU migration jobs OCPBUGS-5497 - MCDRebootError alarm disappears 
after 15 minutes OCPBUGS-5498 - Host inventory quick start for OCP OCPBUGS-5505 - Upgradeability check is throttled too much and with unnecessary non-determinism OCPBUGS-5508 - Report topology usage in vSphere environment via telemetry OCPBUGS-5517 - [Azure/ARO] Update Azure SDK to v63.1.0+incompatible OCPBUGS-5520 - MCDPivotError alert fires due temporary transient failures OCPBUGS-5523 - Catalog, fatal error: concurrent map read and map write OCPBUGS-5524 - Disable vsphere intree tests that exercise multiple tests OCPBUGS-5534 - [UI] When OCP and ODF are upgraded, refresh web console pop-up doesn't appear after ODF upgrade resulting in dashboard crash OCPBUGS-5540 - Typo in WTO for Milliseconds OCPBUGS-5542 - Project dropdown order is not as smart as project list page order OCPBUGS-5546 - Machine API Provider Azure should not modify the Machine spec OCPBUGS-5547 - Webhook Secret (1 of 2) is not removed when Knative Service is deleted OCPBUGS-5559 - add default noProxy config for Azure OCPBUGS-5733 - [Openshift Pipelines] Description of parameters are not shown in pipelinerun description page OCPBUGS-5734 - Azure: VIP 168.63.129.16 should be noProxy to all clouds except Public OCPBUGS-5736 - The main section of the page will keep loading after normal user login OCPBUGS-5759 - Deletion of BYOH Windows node hangs in Ready,SchedulingDisabled OCPBUGS-5802 - update sprig to v3 in cno OCPBUGS-5836 - Incorrect redirection when user try to download windows oc binary OCPBUGS-5842 - executes /host/usr/bin/oc OCPBUGS-5851 - [CI-Watcher]: Using OLM descriptor components deletes operand OCPBUGS-5873 - etcd_object_counts is deprecated and replaced with apiserver_storage_objects, causing "etcd Object Count" dashboard to only show OpenShift resources OCPBUGS-5888 - Failed to install 4.13 ocp on SNO with "error during syncRequiredMachineConfigPools" OCPBUGS-5891 - oc-mirror heads-only does not work with target name OCPBUGS-5903 - gather default ingress controller definition OCPBUGS-5922 - [2047299 Jira placeholder] nodeport not reachable port connection timeout OCPBUGS-5939 - revert "force cert rotation every couple days for development" in 4.13 OCPBUGS-5948 - Runtime error using API Explorer with AdmissionReview resource OCPBUGS-5949 - oc --icsp mapping scope does not match openshift icsp mapping scope OCPBUGS-5959 - [4.13] Bootimage bump tracker OCPBUGS-5988 - Degraded etcd on assisted-installer installation- bootstrap etcd is not removed properly OCPBUGS-5991 - Kube APIServer panics in admission controller OCPBUGS-5997 - Add Git Repository form shows empty permission content and non-working help link until a git url is entered OCPBUGS-6004 - apiserver pods cannot reach etcd on single node IPv6 cluster: transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10" OCPBUGS-6011 - openshift-client package has wrong version of kubectl bundled OCPBUGS-6018 - The MCO can generate a rendered config with old KubeletConfig contents, blocking upgrades OCPBUGS-6026 - cannot change /etc folder ownership inside pod OCPBUGS-6033 - metallb 4.12.0-202301042354 (OCP 4.12) refers to external image OCPBUGS-6049 - Do not show UpdateInProgress when status is Failing OCPBUGS-6053 - availableUpdates: null results in run-time error on Cluster Settings page OCPBUGS-6055 - thanos-ruler-user-workload-1 pod is getting repeatedly re-created after upgrade do 4.10.41 OCPBUGS-6063 - PVs(vmdk) get deleted when scaling down machineSet with vSphere IPI OCPBUGS-6089 - 
Unnecessary event reprocessing OCPBUGS-6092 - ovs-configuration.service fails - Error: Connection activation failed: No suitable device found for this connection OCPBUGS-6097 - CVO hotloops on ImageStream and logs the information incorrectly OCPBUGS-6098 - Show Git icon and URL in repository link in PLR details page should be based on the git provider OCPBUGS-6101 - Daemonset is not upgraded after operator upgrade OCPBUGS-6175 - Image registry Operator does not use Proxy when connecting to openstack OCPBUGS-6185 - Update 4.13 ose-cluster-config-operator image to be consistent with ART OCPBUGS-6187 - Update 4.13 openshift-state-metrics image to be consistent with ART OCPBUGS-6189 - Update 4.13 ose-cluster-authentication-operator image to be consistent with ART OCPBUGS-6191 - Update 4.13 ose-network-metrics-daemon image to be consistent with ART OCPBUGS-6197 - Update 4.13 ose-openshift-apiserver image to be consistent with ART OCPBUGS-6201 - Update 4.13 openshift-enterprise-pod image to be consistent with ART OCPBUGS-6202 - Update 4.13 ose-cluster-kube-apiserver-operator image to be consistent with ART OCPBUGS-6213 - Update 4.13 ose-machine-config-operator image to be consistent with ART OCPBUGS-6222 - Update 4.13 ose-alibaba-cloud-csi-driver image to be consistent with ART OCPBUGS-6228 - Update 4.13 coredns image to be consistent with ART OCPBUGS-6231 - Update 4.13 ose-kube-storage-version-migrator image to be consistent with ART OCPBUGS-6232 - Update 4.13 marketplace-operator image to be consistent with ART OCPBUGS-6233 - Update 4.13 ose-cluster-openshift-apiserver-operator image to be consistent with ART OCPBUGS-6234 - Update 4.13 ose-cluster-bootstrap image to be consistent with ART OCPBUGS-6235 - Update 4.13 cluster-network-operator image to be consistent with ART OCPBUGS-6238 - Update 4.13 oauth-server image to be consistent with ART OCPBUGS-6240 - Update 4.13 ose-cluster-kube-storage-version-migrator-operator image to be consistent with ART OCPBUGS-6241 - Update 4.13 operator-lifecycle-manager image to be consistent with ART OCPBUGS-6247 - Update 4.13 ose-cluster-ingress-operator image to be consistent with ART OCPBUGS-6262 - Add more logs to "oc extract" in mco-first boot service OCPBUGS-6265 - When installing SNO with bootstrap in place it takes CVO 6 minutes to acquire the leader lease OCPBUGS-6270 - Irrelevant vsphere platform data is required OCPBUGS-6272 - E2E tests: Entire pipeline flow from Builder page Start the pipeline with workspace OCPBUGS-631 - machineconfig service is failed to start because Podman storage gets corrupted OCPBUGS-6486 - Image upload fails when installing cluster OCPBUGS-6503 - admin ack test nondeterministically does a check post-upgrade OCPBUGS-6504 - IPI Baremetal Master Node in DualStack getting fd69:: address randomly, OVN CrashLoopBackOff OCPBUGS-6507 - Don't retry network policy peer pods if ips couldn't be fetched OCPBUGS-6577 - Node-exporter NodeFilesystemAlmostOutOfSpace alert exception needed OCPBUGS-6610 - Developer - Topology : 'Filter by resource' drop-down i18n misses OCPBUGS-6621 - Image registry panics while deploying OCP in ap-southeast-4 AWS region OCPBUGS-6624 - Issue deploying the master node with IPI OCPBUGS-6634 - Let the console able to build on other architectures and compatible with prow builds OCPBUGS-6646 - Ingress node firewall CI is broken with latest OCPBUGS-6647 - User Preferences - Applications : Resource type drop-down i18n misses OCPBUGS-6651 - Nodes unready in PublicAndPrivate / Private Hypershift setups behind a proxy 
OCPBUGS-6660 - Uninstall Operator? modal instructions always reference optional checkbox OCPBUGS-6663 - Platform baremetal warnings during create image when fields not defined OCPBUGS-6682 - [OVN] ovs-configuration vSphere vmxnet3 allmulti workaround is now permanent OCPBUGS-6698 - Fix conflict error message in cluster-ingress-operator's ensureNodePortService OCPBUGS-6700 - Cluster-ingress-operator's updateIngressClass function logs success message when error OCPBUGS-6701 - The ingress-operator spuriously updates ingressClass on startup OCPBUGS-6714 - Traffic from egress IPs was interrupted after Cluster patch to Openshift 4.10.46 OCPBUGS-672 - Redhat-operators are failing regularly due to startup probe timing out which in turn increases CPU/Mem usage on Master nodes OCPBUGS-6722 - s390x: failed to generate asset "Image": multiple "disk" artifacts found OCPBUGS-6730 - Pod latency spikes are observed when there is a compaction/leadership transfer OCPBUGS-6731 - Gathered Environment variables (HTTP_PROXY/HTTPS_PROXY) may contain sensible information and should be obfuscated OCPBUGS-6741 - opm fails to serve FBC if cachedir not provided OCPBUGS-6757 - Pipeline Repository (Pipeline-as-Code) list page shows an empty Event type column OCPBUGS-6760 - Couldn't update/delete cpms on gcp private cluster OCPBUGS-6762 - Enhance the user experience for the name-filter-input on Metrics target page OCPBUGS-6765 - "Delete dependent objects of this resource" might cause confusions OCPBUGS-6777 - [gcp][CORS-1988] "create manifests" without an existing "install-config.yaml" missing 4 YAML files in "/openshift" which leads to "create cluster" failure OCPBUGS-6781 - gather Machine objects OCPBUGS-6797 - Empty IBMCOS storage config causes operator to crashloop OCPBUGS-6799 - Repositories list does not show the running pipelinerun as last pipelinerun OCPBUGS-6809 - Uploading large layers fails with "blob upload invalid" OCPBUGS-6811 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13 OCPBUGS-6821 - Update NTO images to be consistent with ART OCPBUGS-6832 - Include openshift_apps_deploymentconfigs_strategy_total to recent_metrics OCPBUGS-6893 - Dev console doesn't finish loading for users with limited access OCPBUGS-6902 - 4.13-e2e-metal-ipi-upgrade-ovn-ipv6 on permafail OCPBUGS-6917 - MultinetworkPolicy: unknown service runtime.v1alpha2.RuntimeService OCPBUGS-6925 - Update OWNERS_ALIASES in release-4.13 branch OCPBUGS-6945 - OS Release reports incorrect version ID OCPBUGS-6953 - ovnkube-master panic nil deref OCPBUGS-6955 - panic in an ovnkube-master pod OCPBUGS-6962 - 'agent_installer' invoker not showing up in telemetry OCPBUGS-6977 - pod-identity-webhook replicas=2 is failing single node jobs OCPBUGS-6978 - Index violation on IGMP_Group during upgrade from 4.12.0 to 4.12.1 OCPBUGS-6994 - All Clusters perspective is not activated automatically when ACM is installed OCPBUGS-702 - The caBundle field of alertmanagerconfigs.monitoring.coreos.com crd is getting removed OCPBUGS-7031 - Pipelines repository list and creation form doesn't show Tech Preview status OCPBUGS-7090 - Add to navigation button in search result does nothing OCPBUGS-7102 - OLM downstream utest fails due to new release-XX+1 branch creation OCPBUGS-7106 - network-tools needs to be updated to give ovn-k master leader info OCPBUGS-7118 - OCP 4.12 does not support launching SGX enclaves OCPBUGS-7144 - On mobile screens, At pipeline details page the info alert on metrics tab is not showing correctly OCPBUGS-7149 - IPv6 multinode 
spoke no moving from rebooting/configuring stage OCPBUGS-7173 - [OVN] DHCP timeouts on Azure arm64, install fails OCPBUGS-7180 - [4.13] Bootimage bump tracker OCPBUGS-7186 - [gcp][CORS-2424] with "secureBoot" enabled, after deleting control-plane machine, the new machine is created with "enableSecureBoot" being False unexpectedly OCPBUGS-7195 - [CI-Watcher] e2e issue with tests: Create Samples Page Timeout Error OCPBUGS-7199 - [CI-Watcher] e2e issue with tests: Interacting with CatalogSource page OCPBUGS-7204 - Manifests generated to multiple "results-xxx" folders when using the oci feature with OCI and nonOCI catalogs OCPBUGS-7207 - MTU migration configuration is cleaned up prematurely while in progress OCPBUGS-723 - ClusterResourceQuota values are not reflecting. OCPBUGS-7268 - [4.13] Modify the PSa pod extractor to mutate pod controller pod specs OCPBUGS-7284 - Hypershift failing new SCC conformance tests OCPBUGS-7291 - ptp keeps trying to start phc2sys even if it's configured as empty string in phc2sysOpts OCPBUGS-7293 - RHCOS 9.2 Failing to Bootstrap on Metal, OpenStack, vSphere (all baremetal runtime platforms) OCPBUGS-7300 - aws-ebs-csi-driver-operator crash loops with HC proxy configured OCPBUGS-7301 - Not possible to use certain start addresses in whereabouts IPv6 range [Backport 4.13] OCPBUGS-7308 - Download kubeconfig for ServiceAccount returns error OCPBUGS-7354 - Installation failed on Azure SDN as network is degraded OCPBUGS-7356 - Default channel on OCP 4.13 should be stable-4.13 OCPBUGS-7359 - [Azure] Replace master failed as new master did not add into lb backend OCPBUGS-736 - Kuryr uses default MTU for service network OCPBUGS-7366 - [gcp] New machine stuck in Provisioning when delete one zone from cpms on gcp with customer vpc OCPBUGS-7372 - fail early on missing node status envs OCPBUGS-7374 - set default timeouts in etcdcli OCPBUGS-7391 - Monitoring operator long delay reconciling extension-apiserver-authentication OCPBUGS-7399 - In the Edit application mode, the name of the added pipeline is not displayed anymore OCPBUGS-7408 - AzureDisk CSI driver does not compile with cachito OCPBUGS-7412 - gomod dependencies failures in 4.13-4.14 container builds OCPBUGS-7417 - gomod dependencies failures in 4.13-4.14 container builds OCPBUGS-7418 - Default values for Scaling fields is not set in Create Serverless function form OCPBUGS-7419 - CVO delay when setting clusterversion available status to true
OCPBUGS-7421 - Missing i18n key for PAC section in Git import form OCPBUGS-7424 - Bump cluster-ingress-operator to k8s APIs v0.26.1 OCPBUGS-7427 - dynamic-demo-plugin.spec.ts requires 10 minutes of unnecessary wait time OCPBUGS-7438 - Egress service does not handle invalid nodeSelectors correctly OCPBUGS-7482 - Fix handling of single failure-domain (non-tagged) deployments in vsphere OCPBUGS-7483 - Hypershift installs on "platform: none" are broken OCPBUGS-7488 - test flake: should not reconcile SC when state is Unmanaged OCPBUGS-7495 - Platform type is ignored OCPBUGS-7517 - Helm page crashes on old releases with a new Secret OCPBUGS-7519 - NFS Storage Tests trigger Kernel Panic on Azure and Metal OCPBUGS-7523 - Add new AWS regions for ROSA OCPBUGS-7542 - Bump router to k8s APIs v0.26.1 OCPBUGS-7555 - Enable default sysctls for kubelet OCPBUGS-7558 - Rebase coredns to 1.10.1 OCPBUGS-7563 - vSphere install can't complete with out-of-tree CCM OCPBUGS-7579 - [azure] failed to parse client certificate when using certificate-based Service Principal with passpharse OCPBUGS-7611 - PTPOperator config transportHost with AMQ is not detected OCPBUGS-7616 - vSphere multiple in-tree test failures (non-zonal) OCPBUGS-7617 - Azure Disk volume is taking time to attach/detach OCPBUGS-7622 - vSphere UPI jobs failing with 'Managed cluster should have machine resources' OCPBUGS-7648 - Bump cluster-dns-operator to k8s APIs v0.26.1 OCPBUGS-7689 - Project Admin is able to Label project with empty string in RHOCP 4 OCPBUGS-7696 - [ Azure ]not able to deploy machine with publicIp:true OCPBUGS-7707 - /etc/NetworkManager/dispatcher.d needs to be relabeled during pivot from 8.6 to 9.2 OCPBUGS-7719 - Update to 4.13.0-ec.3 stuck on leaked MachineConfig OCPBUGS-7729 - Remove ETCD liviness probe. 
OCPBUGS-7731 - Need to cancel threads when agent-tui timeout is stopped OCPBUGS-7733 - Afterburn fails on AWS/GCP clusters born in OCP 4.1/4.2 OCPBUGS-7743 - SNO upgrade from 4.12 to 4.13 rhel9.2 is broken cause of dnsmasq default config OCPBUGS-7750 - fix gofmt check issue in network-metrics-daemon OCPBUGS-7754 - ART having trouble building olm images OCPBUGS-7774 - RawCNIConfig is printed in byte representation on failure, not human readable OCPBUGS-7785 - migrate to using Lease for leader election OCPBUGS-7806 - add "nfs-export" under PV details page OCPBUGS-7809 - sg3_utils package is missing in the assisted-installer-agent Docker file OCPBUGS-781 - ironic-proxy is using a deprecated field to fetch cluster VIP OCPBUGS-7833 - Storage tests failing in no-capabilities job OCPBUGS-7837 - hypershift: aws-ebs-csi-driver-operator uses guest cluster proxy causing PV provisioning failure OCPBUGS-7860 - [azure] message is unclear when missing clientCertificatePassword in osServicePrincipal.json OCPBUGS-7876 - [Descheduler] Enabling LifeCycleUtilization to test namespace filtering does not work OCPBUGS-7879 - Devfile isn't be processed correctly on 'Add from git repo' OCPBUGS-7896 - MCO should not add keepalived pod manifests in case of VSPHERE UPI OCPBUGS-7899 - ODF Monitor pods failing to be bounded because timeout issue with thin-csi SC OCPBUGS-7903 - Pool degraded with error: rpm-ostree kargs: signal: terminated OCPBUGS-7909 - Baremetal runtime prepender creates /etc/resolv.conf mode 0600 and bad selinux context OCPBUGS-794 - OLM version rule is not clear OCPBUGS-7940 - apiserver panics in admission controller OCPBUGS-7943 - AzureFile CSI driver does not compile with cachito OCPBUGS-7970 - [E2E] Always close the filter dropdown in listPage.filter.by OCPBUGS-799 - Reply packet for DNS conversation to service IP uses pod IP as source OCPBUGS-8066 - Create Serverless Function form breaks if Pipeline Operator is not installed OCPBUGS-8086 - Visual issues with listing items OCPBUGS-8243 - [release 4.13] Gather Monitoring pods' Persistent Volumes OCPBUGS-8308 - Bump openshift/kubernetes to 1.26.2 OCPBUGS-8312 - IPI on Power VS clusters cannot deploy MCO OCPBUGS-8326 - Azure cloud provider should use Kubernetes 1.26 dependencies OCPBUGS-8341 - Unable to set capabilities with agent installer based installation OCPBUGS-8342 - create cluster-manifests fails when imageContentSources is missing OCPBUGS-8353 - PXE support is incomplete OCPBUGS-8381 - Console shows x509 error when requesting token from oauth endpoint OCPBUGS-8401 - Bump openshift/origin to kube 1.26.2 OCPBUGS-8424 - ControlPlaneMachineSet: Machine's Node should be Ready to consider the Machine Ready OCPBUGS-8445 - cgroups default setting in OCP 4.13 generates extra MachineConfig OCPBUGS-8463 - OpenStack Failure domains as 4.13 TechPreview OCPBUGS-8471 - [4.13] egress firewall only createas 1 acl for long namespace names OCPBUGS-8475 - TestBoundTokenSignerController causes unrecoverable disruption in e2e-gcp-operator CI job OCPBUGS-8481 - CAPI rebases 4.13 backports OCPBUGS-8490 - agent-tui: display additional checks only when primary check fails OCPBUGS-8498 - aws-ebs-csi-driver-operator ServiceAccount does not include the HCP pull-secret in its imagePullSecrets OCPBUGS-8505 - [4.13] egress firewall acls are deleted on restart OCPBUGS-8511 - [4.13+ ONLY] Don't use port 80 in bootstrap IPI bare metal OCPBUGS-855 - When setting allowedRegistries urls the openshift-samples operator is degraded OCPBUGS-859 - monitor not working with UDP lb 
when externalTrafficPolicy: Local OCPBUGS-860 - CSR are generated with incorrect Subject Alternate Names OCPBUGS-8699 - Metal IPI Install Rate Below 90% OCPBUGS-8701 - oc patch project not working with OCP 4.12 OCPBUGS-8702 - OKD SCOS: remove workaround for rpm-ostree auth OCPBUGS-8703 - fails to switch to kernel-rt with rhel 9.2 OCPBUGS-8710 - [4.13] don't enforce PSa in 4.13 OCPBUGS-8712 - AES-GCM encryption at rest is not supported by kube-apiserver-operator OCPBUGS-8719 - Allow the user to scroll the content of the agent-tui details view OCPBUGS-8741 - [4.13] Pods in same deployment will have different ability to query services in same namespace from one another; ocp 4.10 OCPBUGS-8742 - Origin tests should not specify readyz as the health check path OCPBUGS-881 - fail to create install-config.yaml as apiVIP and ingressVIP are not in machine networks OCPBUGS-8941 - Introduce tooltips for contextual information OCPBUGS-904 - Alerts from MCO are missing namespace OCPBUGS-9079 - ICMP fragmentation needed sent to pods behind a service don't seem to reach the pods OCPBUGS-91 - [ExtDNS] New TXT record breaks downward compatibility by retroactively limiting record length OCPBUGS-9132 - WebSCale: ovn logical router polices incorrect/l3 gw config not updated after IP change OCPBUGS-9185 - Pod latency spikes are observed when there is a compaction/leadership transfer OCPBUGS-9233 - ConsoleQuickStart {{copy}} and {{execute}} features do not work in some cases OCPBUGS-931 - [osp][octavia lb] NodePort allocation cannot be disabled for LB type svcs OCPBUGS-9338 - editor toggle radio input doesn't have distinguishable attributes OCPBUGS-9389 - Detach code in vsphere csi driver is failing OCPBUGS-948 - OLM sets invalid SCC label on its namespaces OCPBUGS-95 - NMstate removes egressip in OpenShift cluster with SDN plugin OCPBUGS-9913 - bacport tests for PDBUnhealthyPodEvictionPolicy as Tech Preview OCPBUGS-9924 - Remove unsupported warning in oc-mirror when using the --skip-pruning flag OCPBUGS-9926 - Enable node healthz server for ovnk in CNO OCPBUGS-9951 - fails to reconcile to RT kernel on interrupted updates OCPBUGS-9957 - Garbage collect grafana-dashboard-etcd OCPBUGS-996 - Control Plane Machine Set Operator OnDelete update should cause an error when more than one machine is ready in an index OCPBUGS-9963 - Better to change the error information more clearly to help understand OCPBUGS-9968 - Operands running management side missing affinity, tolerations, node selector and priority rules than the operator

  1. Summary:

The Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):

1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key 2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key 2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key 2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key 2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information 2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
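
As the MTC description above notes, migrations can be driven through the Kubernetes API as well as the web console. A minimal sketch of that path follows; it is not taken from the advisory, and it assumes MTC's migration.openshift.io/v1alpha1 API group, an openshift-migration namespace, and an already-created MigPlan named demo-plan (the resource names and the spec fields shown are illustrative assumptions). It uses the Python kubernetes client to create a MigMigration that runs the plan.

# Hedged sketch: start a migration by creating a MigMigration custom resource.
# Assumes MTC is installed and a MigPlan named "demo-plan" already exists in the
# "openshift-migration" namespace; names and spec fields are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

mig_migration = {
    "apiVersion": "migration.openshift.io/v1alpha1",
    "kind": "MigMigration",
    "metadata": {"name": "demo-migration", "namespace": "openshift-migration"},
    "spec": {
        "migPlanRef": {"name": "demo-plan", "namespace": "openshift-migration"},
        "stage": False,       # assumed field: full cutover rather than a stage run
        "quiescePods": True,  # assumed field: scale down source workloads before cutover
    },
}

api.create_namespaced_custom_object(
    group="migration.openshift.io",
    version="v1alpha1",
    namespace="openshift-migration",
    plural="migmigrations",
    body=mig_migration,
)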




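The block that follows is the raw VARIoT record for this entry, serialized as JSON-LD: its @context object maps each top-level field (affected_products, configurations, credits, cvss, description, and so on) to a term IRI under https://www.variotdbs.pl/ref/, with @vocab supplying a default for any field that is not listed. A minimal sketch of resolving those field names to IRIs with only the Python standard library, assuming the record has been saved locally as var-202205-1990.json (an illustrative filename):

# Hedged sketch: resolve the VARIoT record's top-level keys to the IRIs declared
# in its "@context". Assumes the JSON shown below was saved to
# "var-202205-1990.json" (illustrative path).
import json

with open("var-202205-1990.json", encoding="utf-8") as fh:
    record = json.load(fh)

context = record["@context"]
vocab = context.get("@vocab", "")

for key in record:
    if key.startswith("@"):
        continue  # skip JSON-LD keywords such as @context and @id
    term = context.get(key)
    # A term may be declared as {"@id": ...}; otherwise fall back to @vocab + key.
    iri = term["@id"] if isinstance(term, dict) and "@id" in term else vocab + key
    print(f"{key} -> {iri}")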
{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202205-1990",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "vim",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "vim",
        "version": "8.2.5037"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-1927"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:vim:vim:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "8.2.5037",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "13.0",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-1927"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168022"
      },
      {
        "db": "PACKETSTORM",
        "id": "168538"
      },
      {
        "db": "PACKETSTORM",
        "id": "168112"
      },
      {
        "db": "PACKETSTORM",
        "id": "168213"
      },
      {
        "db": "PACKETSTORM",
        "id": "168139"
      },
      {
        "db": "PACKETSTORM",
        "id": "172441"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2022-1927",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "NVD",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "MEDIUM",
            "trust": 1.0,
            "userInteractionRequired": true,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-423615",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "NVD",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "security@huntr.dev",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2022-1927",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "security@huntr.dev",
            "id": "CVE-2022-1927",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202205-4253",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-423615",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-423615"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1927"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1927"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Buffer Over-read in GitHub repository vim/vim prior to 8.2. Vim is a cross-platform text editor. Vim versions prior to 8.2 have a security vulnerability caused by buffer overreading. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n2031228 - CVE-2021-43813 grafana: directory traversal vulnerability\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2115198 - build ceph containers for RHCS 5.2 release\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2041540 - RHACM 2.4 using deprecated APIs in managed clusters\n2074766 - vSphere network name doesn\u0027t allow entering spaces and doesn\u0027t reflect YAML changes\n2079418 - cluster update status is stuck, also update is not even visible\n2088486 - Policy that creates cluster role is showing as not compliant due to Request entity too large message\n2089490 - Upgraded from RHACM 2.2--\u003e2.3--\u003e2.4 and cannot create cluster\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2097464 - ACM Console Becomes Unusable After a Time\n2100613 - RHACM 2.4.6 images\n2102436 - Cluster Pools with conflicting name of existing clusters in same namespace fails creation and deletes existing cluster\n2102495 - ManagedClusters in Pending import state after ACM hub migration\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2109354 - CVE-2022-31150 nodejs16: CRLF injection in node-undici\n2121396 - CVE-2022-31151 nodejs/undici: Cookie headers uncleared on cross-origin redirect\n2124794 - CVE-2022-36067 vm2:  Sandbox Escape in vm2\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update\nAdvisory ID:       RHSA-2022:6051-01\nProduct:           RHOL\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:6051\nIssue date:        2022-08-18\nCVE Names:         CVE-2021-38561 CVE-2022-0759 CVE-2022-1012\n                   CVE-2022-1292 CVE-2022-1586 CVE-2022-1785\n                   CVE-2022-1897 CVE-2022-1927 CVE-2022-2068\n                   CVE-2022-2097 CVE-2022-21698 CVE-2022-30631\n                   CVE-2022-32250\n====================================================================\n1. Summary:\n\nAn update is now available for RHOL-5.5-RHEL-8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nLogging Subsystem 5.5.0 - Red Hat OpenShift\n\nSecurity Fix(es):\n\n* kubeclient: kubeconfig parsing error can lead to MITM attacks\n(CVE-2022-0759)\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1415 - Allow users to tune fluentd\nLOG-1539 - Events and CLO csv are not collected after running `oc adm must-gather --image=$downstream-clo-image `\nLOG-1713 - Reduce Permissions granted for prometheus-k8s service account\nLOG-2063 - Collector pods fail to start when a Vector only Cluster Logging instance is created. \nLOG-2134 - The infra logs are sent to app-xx indices\nLOG-2159 - Cluster Logging Pods in CrashLoopBackOff\nLOG-2165 - [Vector] Default log level debug makes it hard to find useful error/failure messages. \nLOG-2167 - [Vector] Collector pods fails to start with configuration error when using Kafka SASL over SSL\nLOG-2169 - [Vector] Logs not being sent to Kafka with SASL plaintext. \nLOG-2172 - [vector]The openshift-apiserver and ovn audit logs can not  be collected. \nLOG-2242 - Log file metric exporter is still following /var/log/containers files. \nLOG-2243 - grafana-dashboard-cluster-logging should be deleted once clusterlogging/instance was removed\nLOG-2264 - Logging link should contain an icon\nLOG-2274 - [Logging 5.5] EO doesn\u0027t recreate secrets kibana and kibana-proxy after removing them. \nLOG-2276 - Fluent config format is hard to read via configmap\nLOG-2290 - ClusterLogging Instance status in not getting updated in UI\nLOG-2291 - [release-5.5] Events listing out of order in Kibana 6.8.1\nLOG-2294 - [Vector] Vector internal metrics are not exposed via HTTPS due to which OpenShift Monitoring Prometheus service cannot scrape the metrics endpoint. \nLOG-2300 - [Logging 5.5]ES pods can\u0027t be ready after removing secret/signing-elasticsearch\nLOG-2303 - [Logging 5.5] Elasticsearch cluster upgrade stuck\nLOG-2308 - configmap grafana-dashboard-elasticsearch is being created and deleted continously\nLOG-2333 - Journal logs not reaching Elasticsearch output\nLOG-2337 - [Vector] Missing @ prefix from the timestamp field in log record. \nLOG-2342 - [Logging 5.5] Kibana pod can\u0027t connect to ES cluster after removing secret/signing-elasticsearch: \"x509: certificate signed by unknown authority\"\nLOG-2384 - Provide a method to get authenticated from GCP\nLOG-2411 - [Vector] Audit logs forwarding not working. \nLOG-2412 - CLO\u0027s loki output url is parsed wrongly\nLOG-2413 - PriorityClass cluster-logging is deleted if provide an invalid log type\nLOG-2418 - EO supported time units don\u0027t match the units specified in CRDs. \nLOG-2439 - Telemetry: the managedStatus\u0026healthStatus\u0026version values are wrong\nLOG-2440 - [loki-operator] Live tail of logs does not work on OpenShift\nLOG-2444 - The write index is removed when `the size of the index` \u003e `diskThresholdPercent% * total size`. \nLOG-2460 - [Vector] Collector pods fail to start on a FIPS enabled cluster. \nLOG-2461 - [Vector] Vector auth config not generated when user provided bearer token is used in a secret for connecting to LokiStack. 
\nLOG-2463 - Elasticsearch operator repeatedly prints error message when checking indices\nLOG-2474 - EO shouldn\u0027t grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.5]\nLOG-2522 - CLO supported time units don\u0027t match the units specified in CRDs. \nLOG-2525 - The container\u0027s logs are not sent to separate index if the annotation is added after the pod is ready. \nLOG-2546 - TLS handshake error on loki-gateway for FIPS cluster\nLOG-2549 - [Vector] [master] Journald logs not sent to the Log store when using Vector as collector. \nLOG-2554 - [Vector] [master] Fallback index is not used when structuredTypeKey is missing from JSON log data\nLOG-2588 - FluentdQueueLengthIncreasing rule failing to be evaluated. \nLOG-2596 - [vector]the condition in [transforms.route_container_logs] is inaccurate\nLOG-2599 - Supported values for level field don\u0027t match documentation\nLOG-2605 - $labels.instance is empty in the message when firing FluentdNodeDown alert\nLOG-2609 - fluentd and vector are unable to ship logs to elasticsearch when cluster-wide proxy is in effect\nLOG-2619 - containers violate PodSecurity -- Log Exporation\nLOG-2627 - containers violate PodSecurity -- Loki\nLOG-2649 - Level Critical should match the beginning of the line as the other levels\nLOG-2656 - Logging uses deprecated v1beta1 apis\nLOG-2664 - Deprecated Feature logs causing too much noise\nLOG-2665 - [Logging 5.5] Sometimes collector fails to push logs to Elasticsearch cluster\nLOG-2693 - Integration with Jaeger fails for ServiceMonitor\nLOG-2700 - [Vector] vector container can\u0027t start due to \"unknown field `pod_annotation_fields`\" . \nLOG-2703 - Collector DaemonSet is not removed when CLF is deleted for fluentd/vector only CL instance\nLOG-2725 - Upgrade logging-eventrouter Golang  version and tags\nLOG-2731 - CLO keeps reporting `Reconcile ServiceMonitor retry error` and `Reconcile Service retry error` after creating clusterlogging. \nLOG-2732 - Prometheus Operator pod throws \u0027skipping servicemonitor\u0027 error on Jaeger integration\nLOG-2742 - unrecognized outputs when use the sts role secret\nLOG-2746 - CloudWatch forwarding rejecting large log events, fills tmpfs\nLOG-2749 - OpenShift Logging Dashboard for Elastic Shards shows \"active_primary\" instead of \"active\" shards. \nLOG-2753 - Update Grafana configuration for LokiStack integration on grafana/loki repo\nLOG-2763 - [Vector]{Master} Vector\u0027s healthcheck fails when forwarding logs to Lokistack. \nLOG-2764 - ElasticSearch operator does not respect referencePolicy when selecting oauth-proxy image\nLOG-2765 - ingester pod can not be started in IPv6 cluster\nLOG-2766 - [vector] failed to parse cluster url: invalid authority IPv6 http-proxy\nLOG-2772 - arn validation failed when role_arn=arn:aws-us-gov:xxx\nLOG-2773 - No cluster-logging-operator-metrics  service in logging 5.5\nLOG-2778 - [Vector] [OCP 4.11] SA token not added to Vector config when connecting to LokiStack instance without CLF creds secret required by LokiStack. \nLOG-2784 - Japanese log messages are garbled at Kibana\nLOG-2793 - [Vector] OVN audit logs are missing the level field. \nLOG-2864 - [vector] Can not sent logs to default when loki is the default output in CLF\nLOG-2867 - [fluentd] All logs are sent to application tenant when loki is used as default logstore in CLF. \nLOG-2873 - [Vector] Cannot configure CPU/Memory requests/limits when using Vector as collector. 
\nLOG-2875 - Seeing a black rectangle box on the graph in Logs view\nLOG-2876 - The link to the \u0027Container details\u0027 page on the \u0027Logs\u0027 screen throws error\nLOG-2877 - When there is no query entered, seeing error message on the Logs view\nLOG-2882 - RefreshIntervalDropdown and TimeRangeDropdown always set back to its original values when switching between pages in \u0027Logs\u0027 screen\n\n6. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-38561\nhttps://access.redhat.com/security/cve/CVE-2022-0759\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-30631\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.12 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):\n\n2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation\n2101411 - RHACM 2.3.12 images\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. 
\n2109205 - HTTPS_PROXY ENV missing in some CSI driver operators\n2109270 - Kube controllers crash when nodes are shut off in OpenStack\n2109489 - Reply to arp requests on interfaces with no ip\n2109709 - Namespace value is missing on the list when selecting \"All namespaces\" for operators\n2109731 - alertmanager-main pods failing to start due to startupprobe timeout\n2109866 - Cannot delete a Machine if a VM got stuck in ERROR\n2109977 - storageclass should not be created for unsupported vsphere version\n2110482 - [vsphere] failed to create cluster if datacenter is embedded in a Folder\n2110723 - openshift-tests: allow -f to match tests for any test suite\n2110737 - Master node in SchedulingDisabled after upgrade from 4.10.24 -\u003e 4.11.0-rc.4\n2111037 - Affinity rule created in console deployment for single-replica infrastructure\n2111347 - dummy bug for 4.10.z bz2111335\n2111471 - Node internal DNS address is not set for machine\n2111475 - Fetch internal IPs of vms from dhcp server\n2111587 - [4.11] Export OVS metrics\n2111619 - Pods are unable to reach clusterIP services, ovn-controller isn\u0027t installing the group mod flows correctly\n2111992 - OpenShift controller manager needs permissions to get/create/update leases for leader election\n2112297 - bond-cni: Backport \"mac duplicates\" 4.11\n2112353 - lifecycle.posStart hook does not have network connectivity. \n2112908 - Search resource \"virtualmachine\" in \"Home -\u003e Search\" crashes the console\n2112912 - sum_irate doesn\u0027t work in OCP 4.8\n2113926 - hypershift cluster deployment hang due to nil pointer dereference for hostedControlPlane.Spec.Etcd.Managed\n2113938 - Fix e2e tests for [reboots][machine_config_labels] (tsc=nowatchdog)\n2114574 - can not upgrade. Incorrect reading of olm.maxOpenShiftVersion\n2114602 - Upgrade failing because restrictive scc is injected into version pod\n2114964 - kola dhcp.propagation test failing\n2115315 - README file for helm charts coded in Chinese shows messy characters when viewing in developer perspective. \n2115435 - [4.11] INIT container stuck forever\n2115564 - ClusterVersion availableUpdates is stale: PromQL conditional risks vs. slow/stuck Thanos\n2115817 - Updates / config metrics are not available in 4.11\n2116009 - Node Tuning Operator(NTO) - OCP upgrade failed due to node-tuning CO still progressing\n2116557 - Order of config attributes are not maintained during conversion of PT4l from ptpconfig to ptp4l.0.config file\n2117223 - kubernetes-nmstate-operator fails to install with error \"no channel heads (entries not replaced by another entry) found in channel\"\n2117324 - catalog-operator fatal error: concurrent map writes\n2117353 - kola dhcp.propagation test out of memory\n2117370 - Migrate openshift-ansible to ansible-core\n2117746 - Bump to latest k8s.io 1.24 release\n2118214 - dummy bug for 4.10.z bz2118209\n2118375 - pass the \"--quiet\" option via the buildconfig for s2i\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOCPBUGS-1 - Test Bug\n\n6. Summary:\n\nRed Hat OpenShift Container Platform release 4.13.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.13.0. 
See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2023:1325\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html\n\nSecurity Fix(es):\n\n* goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as\nrandom as they should be (CVE-2021-4238)\n\n* go-yaml: Denial of Service in go-yaml (CVE-2021-4235)\n\n* mongo-go-driver: specific cstrings input may not be properly validated\n(CVE-2021-20329)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* helm: Denial of service through repository index file\n(CVE-2022-23525)\n\n* helm: Denial of service through schema file (CVE-2022-23526)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* vault: insufficient certificate revocation list checking (CVE-2022-41316)\n\n* golang: net/http: excessive memory growth in a Go server accepting HTTP/2\nrequests (CVE-2022-41717)\n\n* x/net/http2/h2c: request smuggling (CVE-2022-41721)\n\n* net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK\ndecoding (CVE-2022-41723)\n\n* golang: crypto/tls: large handshake records may cause panics\n(CVE-2022-41724)\n\n* golang: net/http, mime/multipart: denial of service from excessive\nresource consumption (CVE-2022-41725)\n\n* exporter-toolkit: authentication bypass via cache poisoning\n(CVE-2022-46146)\n\n* vault: Vault\u2019s Microsoft SQL Database Storage Backend Vulnerable to SQL\nInjection Via Configuration File (CVE-2023-0620)\n\n* hashicorp/vault: Vault\u2019s PKI Issuer Endpoint Did Not Correctly Authorize\nAccess to Issuer Metadata (CVE-2023-0665)\n\n* hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations\n(CVE-2023-25000)\n\n* helm: getHostByName Function Information Disclosure (CVE-2023-25165)\n\n* containerd: Supplementary groups are not set up properly (CVE-2023-25173)\n\n* runc: volume mount race condition (regression of CVE-2019-19921)\n(CVE-2023-27561)\n\n* runc: AppArmor can be bypassed when `/proc` inside the container is\nsymlinked with a specific mount configuration (CVE-2023-28642)\n\n* baremetal-operator: plain-text username and hashed password readable by\nanyone having a cluster-wide read-access (CVE-2023-30841)\n\n* runc: Rootless runc makes `/sys/fs/cgroup` writable (CVE-2023-25809)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAll OpenShift Container Platform 4.13 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift CLI (oc)\nor web console. Instructions for upgrading a cluster are available at\nhttps://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html\n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.13 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nfor x86_64, s390x, ppc64le, and aarch64 architectures. The image digests\nmay be found at\nhttps://quay.io/repository/openshift-release-dev/ocp-release?tab=tags\n\nThe sha values for the release are:\n\n(For x86_64 architecture)\nThe image digest is\nsha256:74b23ed4bbb593195a721373ed6693687a9b444c97065ce8ac653ba464375711\n\n(For s390x architecture)\nThe image digest is\nsha256:a32d509d960eb3e889a22c4673729f95170489789c85308794287e6e9248fb79\n\n(For ppc64le architecture)\nThe image digest is\nsha256:bca0e4a4ed28b799e860e302c4f6bb7e11598f7c136c56938db0bf9593fb76f8\n\n(For aarch64 architecture)\nThe image digest is\nsha256:e07e4075c07fca21a1aed9d7f9c165696b1d0fa4940a219a000894e5683d846c\n\nAll OpenShift Container Platform 4.13 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1770297 - console odo download link needs to go to an official location or have caveats [openshift-4.4]\n1853264 - Metrics produce high unbound cardinality\n1877261 - [RFE] Mounted volume size issue when restore a larger size pvc than snapshot\n1904573 - OpenShift: containers modify /etc/passwd group writable\n1943194 - when using gpus, more nodes than needed are created by the node autoscaler\n1948666 - After entering valid git repo url on Import from git page, throwing warning message instead Validated\n1971033 - CVE-2021-20329 mongo-go-driver: specific cstrings input may not be properly validated\n2005232 - Pods list page should only show Create Pod button to user has sufficient permission\n2016006 - Repositories list does not show the running pipelinerun as last pipelinerun\n2027000 - The user is ignored when we create a new file using a MachineConfig\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047299 - nodeport not reachable port connection timeout\n2050230 - Implement LIST call chunking in openshift-sdn\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2065166 - GCP - Less privileged service accounts are created with Service Account User role\n2066388 - Wrong Error generates when https is missing in the value of `regionEndpoint`   in `configs.imageregistry.operator.openshift.io/cluster`\n2066664 - [cluster-storage-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles\n2070744 - openshift-install destroy in us-gov-west-1 results in infinite loop - AWS govcloud\n2075548 - Support AllocateLoadBalancerNodePorts=False with ETP=local, LGW mode\n2076619 - Could not create deployment with an unknown git repo and builder image build strategy\n2078222 - egressIPs behave inconsistently towards in-cluster traffic (hosts and services backed by host-networked pods)\n2079981 - PVs not deleting on azure (or very slow to delete) since CSI migration to azuredisk\n2081858 - 
OVN-Kubernetes: SyncServices for nodePortWatcherIptables should propagate failures back to caller\n2083087 - \"Delete dependent objects of this resource\" might cause confusions\n2084452 - PodDisruptionBudgets help message should be semantic\n2087043 - Cluster API components should use K8s 1.24 dependencies\n2087553 - No rhcos-4.11/x86_64 images in the 2 new regions on alibabacloud, \"ap-northeast-2 (South Korea (Seoul))\" and \"ap-southeast-7 (Thailand (Bangkok))\"\n2089093 - CVO hotloops on OperatorGroup due to the diff of \"upgradeStrategy\":  string(\"Default\")\n2089138 - CVO hotloops on ValidatingWebhookConfiguration /performance-addon-operator\n2090680 - upgrade for a disconnected cluster get hang on retrieving and verifying payload\n2092567 - Network policy is not being applied as expected\n2092811 - Datastore name is too long\n2093339 - [rebase v1.24]  Only known images used by tests\n2095719 - serviceaccounts are not updated after upgrade from 4.10 to 4.11\n2100181 - WebScale: configure-ovs.sh fails because it picks the wrong default interface\n2100429 - [apiserver-auth] default SCC restricted allow volumes don\u0027t have \"ephemeral\" caused deployment with Generic Ephemeral Volumes stuck at Pending\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2104978 - MCD degrades are not overwrite-able by subsequent errors\n2110565 - PDB: Remove add/edit/remove actions in Pod resource action menu\n2110570 - Topology sidebar: Edit pod count shows not the latest replicas value when edit the count again\n2110982 - On GCP, need to check load balancer health check IPs  required for restricted installation\n2113973 - operator scc is nor fixed when we define a custom scc with readOnlyRootFilesystem: true\n2114515 - Getting critical NodeFilesystemAlmostOutOfSpace alert for 4K tmpfs\n2115265 - Search page: LazyActionMenus are shown below Add/Remove from navigation button\n2116686 - [capi] Cluster kind should be valid\n2117374 - Improve Pod Admission failure for restricted-v2 denials that pass with restricted\n2135339 - CVE-2022-41316 vault: insufficient certificate revocation list checking\n2149436 - CVE-2022-46146 exporter-toolkit: authentication bypass via cache poisoning\n2154196 - CVE-2022-23526 helm: Denial of service through schema file\n2154202 - CVE-2022-23525 helm: Denial of service through through repository index file\n2156727 - CVE-2021-4235 go-yaml: Denial of Service in go-yaml\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n2161274 - CVE-2022-41717 golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests\n2162182 - CVE-2022-41721 x/net/http2/h2c: request smuggling\n2168458 - CVE-2023-25165 helm: getHostByName Function Information Disclosure\n2174485 - CVE-2023-25173 containerd: Supplementary groups are not set up properly\n2175721 - CVE-2023-27561 runc: volume mount race condition (regression of CVE-2019-19921)\n2178358 - CVE-2022-41723 net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding\n2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption\n2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics\n2182883 - CVE-2023-28642 runc: AppArmor can be bypassed when `/proc` inside the container is symlinked with a specific mount configuration\n2182884 - CVE-2023-25809 runc: Rootless runc makes `/sys/fs/cgroup` writable\n2182972 - 
CVE-2023-25000 hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations\n2182981 - CVE-2023-0665 hashicorp/vault: Vault?s PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata\n2184663 - CVE-2023-0620 vault: Vault?s Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File\n2190116 - CVE-2023-30841 baremetal-operator: plain-text username and hashed password readable by anyone having a cluster-wide read-access\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOCPBUGS-10036 - Enable aesgcm encryption provider by default in openshift/api\nOCPBUGS-10038 - Enable aesgcm encryption provider by default in openshift/cluster-config-operator\nOCPBUGS-10042 - Enable aesgcm encryption provider by default in openshift/cluster-kube-apiserver-operator\nOCPBUGS-10043 - Enable aesgcm encryption provider by default in openshift/cluster-openshift-apiserver-operator\nOCPBUGS-10044 - Enable aesgcm encryption provider by default in openshift/cluster-authentication-operator\nOCPBUGS-10047 - oc-mirror  print log: unable to parse reference oci://mno/redhat-operator-index:v4.12\nOCPBUGS-10057 - With WPC card configured as GM or BC, phc2sys clock lock state is shown as FREERUN in ptp metrics while it should be LOCKED\nOCPBUGS-10213 - aws: mismatch between RHCOS and AWS SDK regions\nOCPBUGS-10220 - Newly provisioned machines unable to join cluster\nOCPBUGS-10221 - Risk cache warming takes too long on channel changes\nOCPBUGS-10237 - Limit the nested repository path while mirroring the images using oc-mirror for those who cant have nested paths in their container registry\nOCPBUGS-10239 - [release-4.13] Fix of ServiceAccounts gathering\nOCPBUGS-10249 - PollConsoleUpdates won\u0027t fire toast if one or more manifests errors when plugins change\nOCPBUGS-10267 - NetworkManager TUI quits regardless of a detected unsupported configuration\nOCPBUGS-10271 - [4.13] Netflink overflow alert\nOCPBUGS-10278 - Graph-data is not mounted on graph-builder correctly while install using graph-data image built by oc-mirror\nOCPBUGS-10281 - Openshift Ansible OVS version out of sync with RHCOS\nOCPBUGS-10291 - Broken link for Ansible tagging\nOCPBUGS-10298 - TenantID is ignored in some cases\nOCPBUGS-10320 - Catalogs should not be included in the ImageContentSourcePolicy.yaml\nOCPBUGS-10321 - command cannot be worked after chroot /host for oc debug pod\nOCPBUGS-1033 - Multiple extra manifests in the same file are not applied correctly\nOCPBUGS-10334 - Nutanix cloud-controller-manager pod not have permission to get/list ConfigMap\nOCPBUGS-10353 - kube-apiserver not receiving or processing shutdown signal after coreos 9.2 bump\nOCPBUGS-10367 - Pausing pools in OCP 4.13 will cause critical alerts to fire\nOCPBUGS-10377 - [gcp] IPI installation with Shielded VMs enabled failed on restarting the master machines\nOCPBUGS-10404 - Workload annotation missing from deployments\nOCPBUGS-10421 - RHCOS 4.13 live iso x84_64 contains restrictive policy.json\nOCPBUGS-10426 - node-topology is not exported due to kubelet.sock: connect: permission denied \nOCPBUGS-10427 - 4.1 born cluster fails to scale-up due to podman run missing `--authfile` flag\nOCPBUGS-10432 - CSI Inline Volume admission plugin does not log object name correctly\nOCPBUGS-10440 - OVN IPSec - does not create IPSec tunnels\nOCPBUGS-10474 - OpenShift pipeline TaskRun(s) column Duration is not present as column in UI\nOCPBUGS-10476 - Disable netlink mode of netclass collector in Node Exporter. 
\nOCPBUGS-1048 - if tag categories don\u0027t exist, the installation will fail to bootstrap\nOCPBUGS-10483 - [4.13 arm64 image][AWS EFS] Driver fails to get installed/exec format error\nOCPBUGS-10558 - MAPO failing to retrieve flavour information after rotating credentials\nOCPBUGS-10585 - [4.13] Request to update RHCOS installer bootimage metadata \nOCPBUGS-10586 - Console shows x509 error when requesting token from oauth endpoint\nOCPBUGS-10597 - The agent-tui shows again during the installation\nOCPBUGS-1061 - administrator console, monitoring-alertmanager-edit user list or create silence, \"Observe - Alerting - Silences\" page is pending\nOCPBUGS-10645 - 4.13: Operands running management side missing affinity, tolerations, node selector and priority rules than the operator\nOCPBUGS-10656 - create image command erroneously logs that Base ISO was obtained from release\nOCPBUGS-10657 - When releaseImage is a digest the create image command generates spurious warning\nOCPBUGS-10658 - Wrong PrimarySubnet in OpenstackProviderSpec when using Failure Domains\nOCPBUGS-10661 - machine API operator failing with No Major.Minor.Patch elements found\nOCPBUGS-10678 - Developer catalog shows ImageStreams as samples which has no sampleRepo\nOCPBUGS-10679 - Show type of sample on the samples view\nOCPBUGS-10689 - [IPI on BareMetal]: Workers failing inspection when installing with proxy\nOCPBUGS-10697 - [release-4.13] User is allowed to create IP Address pool with duplicate entries for namespace and matchExpression for serviceSelector and namespaceSelector\nOCPBUGS-10698 - [release-4.13] Already assigned IP address is removed from a service on editing the ip address pool. \nOCPBUGS-10710 - Metal virtual media job permafails during early bootstrap\nOCPBUGS-10716 - Image Registry default to Removed on IBM cloud after 4.13.0-ec.3\nOCPBUGS-10739 - [4.13] Bootimage bump tracker\nOCPBUGS-10744 - [4.13] EgressFirewall status disappeared \nOCPBUGS-10746 - Downstream Operator-SDK v1.22.2 to OCP 4.13\nOCPBUGS-10771 - upgrade test failure with \"Cluster operator control-plane-machine-set is not available\"\nOCPBUGS-10773 - TestNewAppRun unit test failing\nOCPBUGS-10792 - Hypershift namespace servicemonitor has wrong API group\nOCPBUGS-10793 - Ignore device list missing in Node Exporter \nOCPBUGS-10796 - [4.13] Egress firewall is not retried on error\nOCPBUGS-10799 - Network policy perf improvements\nOCPBUGS-10801 - [4.13] Upgrade to 4.10 stalled on timeout completing syncEgressFirewall\nOCPBUGS-10811 - Missing vCenter build number in telemetry\nOCPBUGS-10813 - SCOS bootstrap should skip pivot when root is not writable\nOCPBUGS-10826 - RHEL 9.2 doesn\u0027t contain the `kernel-abi-whitelists` package. 
\nOCPBUGS-10832 - Edit Deployment (and DC) form doesn\u0027t enable Save button when changing strategy type\nOCPBUGS-10833 - update the default pipelineRun template name\nOCPBUGS-10834 - [OVNK] [IC] Having only one leader election in the master process\nOCPBUGS-10873 - OVN to OVN-H migration seems broken\nOCPBUGS-10888 - oauth-server fails to invalidate cache, causing non existing groups being referenced\nOCPBUGS-10890 - Hypershift replace upgrade: node in NotReady after upgrading from a 4.14 image to another 4.14 image\nOCPBUGS-10891 - Cluster Autoscaler balancing similar nodes test fails randomly\nOCPBUGS-10892 - Passwords printed in log messages\nOCPBUGS-10893 - Remove unsupported warning in oc-mirror when using the --skip-pruning flag\nOCPBUGS-10902 - [IBMCloud] destroyed the private cluster, fail to cleanup the dns records\nOCPBUGS-10903 - [IBMCloud] fail to ssh to master/bootstrap/worker nodes from the bastion inside a customer vpc. \nOCPBUGS-10907 - move to rhel9 in DTK for 4.13\nOCPBUGS-10914 - Node healthz server: return unhealthy when pod is to be deleted\nOCPBUGS-10919 - Update Samples Operator to use latest jenkins 4.12 release\nOCPBUGS-10923 - Cluster bootstrap waits for only one master to join before finishing \nOCPBUGS-10929 - Kube 1.26 for ovn-k\nOCPBUGS-10946 - For IPv6-primary dual-stack cluster, kubelet.service renders only single node-ip\nOCPBUGS-10951 - When imagesetconfigure without OCI FBC format config, but command with use-oci-feature  flag, the oc-mirror command should check the imagesetconfigure firstly and print error immediately\nOCPBUGS-10953 - ovnkube-node does not close up correctly\nOCPBUGS-10955 - [release-4.13] NMstate complains about ping not working when adding multiple routing tables with different gateways\nOCPBUGS-10960 - [4.13] Vertical Scaling: do not trigger inadvertent machine deletion during bootstrap\nOCPBUGS-10965 - The network-tools image stream is missing in the cluster samples\nOCPBUGS-10982 - [4.13] nodeSelector in EgressFirewall doesn\u0027t work in dualstack cluster\nOCPBUGS-10989 - Agent create sub-command is returning fatal error\nOCPBUGS-10990 - EgressIP doesn\u0027t work in GCP XPN cluster\nOCPBUGS-11004 - Bootstrap kubelet client cert should include system:serviceaccounts group\nOCPBUGS-11010 - [vsphere] zone cluster installation fails if vSphere Cluster is embedded in Folder\nOCPBUGS-11022 - [4.13][scale] all egressfirewalls will be updated on every node update\nOCPBUGS-11023 - [4.13][scale] Ingress network policy creates more flows than before\nOCPBUGS-11031 - SNO OCP upgrade from 4.12 to 4.13 failed due to node-tuning operator is not available - tuned pod stuck at Terminating\nOCPBUGS-11032 - Update the validation interval for the cluster transfer to 12 hours\nOCPBUGS-11040 - --container-runtime is being removed in k8s 1.27\nOCPBUGS-11054 - GCP: add europe-west12 region to the survey as supported region\nOCPBUGS-11055 - APIServer service isn\u0027t selected correctly for PublicAndPrivate cluster when external-dns is not configured\nOCPBUGS-11058 - [4.13] Conmon leaks symbolic links in /var/run/crio when pods are deleted\nOCPBUGS-11068 - nodeip-configuration not enabled for VSphere UPI\nOCPBUGS-11107 - Alerts display incorrect source when adding external alert sources\nOCPBUGS-11117 - The provided gcc RPM inside DTK does not match the gcc used to build the kernel\nOCPBUGS-11120 - DTK docs should mention the ubi9 base image instead of ubi8\nOCPBUGS-11213 - BMH moves to deleting before all finalizers are processed\nOCPBUGS-11218 - 
\"pipelines-as-code-pipelinerun-go\" configMap is not been used for the Go repository \nOCPBUGS-11222 - kube-controller-manager cluster operator is degraded due connection refused while querying rules\nOCPBUGS-11227 - Relax CSR check due to k8s 1.27 changes\nOCPBUGS-11232 - All projects options shows as undefined after selection in Dev perspective Pipelines page \nOCPBUGS-11248 - Secret name variable get renders in Create Image pull secret alert\nOCPBUGS-1125 - Fix disaster recovery test [sig-etcd][Feature:DisasterRecovery][Disruptive] [Feature:EtcdRecovery] Cluster should restore itself after quorum loss [Serial]\nOCPBUGS-11257 - egressip cannot be assigned on hypershift hosted cluster node\nOCPBUGS-11261 - [AWS][4.13] installer get stuck if BYO private hosted zone is configured\nOCPBUGS-11263 - PTP KPI version 4.13 RC2 WPC - offset jumps to huge numbers \nOCPBUGS-11307 - Egress firewall node selector test missing\nOCPBUGS-11333 - startupProbe for UWM prometheus is still 15m\nOCPBUGS-11339 - ose-ansible-operator base image version is still 4.12 in the operators that generated by operator-sdk 4.13\nOCPBUGS-11340 - ose-helm-operator base image version is still 4.12 in the operators that generated by operator-sdk 4.13\nOCPBUGS-11341 - openshift-manila-csi-driver is missing the workload.openshift.io/allowed label\nOCPBUGS-11354 - CPMS: node readiness transitions not always trigger reconcile \nOCPBUGS-11384 - Switching from enabling realTime to disabling Realtime Workloadhint causes stalld to be enabled\nOCPBUGS-11390 - Service Binding Operator installation fails: \"A subscription for this operator already exists in namespace ...\"\nOCPBUGS-11424 - [release-4.13] new whereabouts reconciler relies on HOSTNAME which != spec.nodeName\nOCPBUGS-11427 - [release-4.13] whereabouts reads wrong annotation \"k8s.v1.cni.cncf.io/networks-status\", should be \"k8s.v1.cni.cncf.io/network-status\"\nOCPBUGS-11456 - PTP - When GM and downstream slaves are configured on same server, ptp metrics show slaves as FREERUN\nOCPBUGS-11458 - Ingress Takes 40s on Average Downtime During GCP OVN Upgrades\nOCPBUGS-11460 - CPMS doesn\u0027t always generate configurations for AWS\nOCPBUGS-11468 - Community operator cannot be mirrored due to malformed image address\nOCPBUGS-11469 - [release4.13] \"exclude bundles with `olm.deprecated` property when rendering\" not backport\nOCPBUGS-11473 - NS autolabeler requires RoleBinding subject namespace to be set when using ServiceAccount\nOCPBUGS-11485 - [4.13] NVMe disk by-id rename breaks LSO/ODF\nOCPBUGS-11503 - Update 4.13 cluster-network-operator image in Dockerfile to be consistent with ART\nOCPBUGS-11506 - CPMS e2e periodics tests timeout failures\nOCPBUGS-11507 - Potential 4.12 to 4.13 upgrade failure due to NIC rename\nOCPBUGS-11510 - Setting cpu-quota.crio.io to `disable` with crun causes container creation to fail\nOCPBUGS-11511 - [4.13] static container pod cannot be running due to CNI request failed with status 400\nOCPBUGS-11529 - [Azure] fail to collect the vm serial log with ?gather bootstrap?\nOCPBUGS-11536 - Cluster monitoring operator runs node-exporter with btrfs collector\nOCPBUGS-11545 - multus-admission-controller should not run as root under Hypershift-managed CNO\nOCPBUGS-11558 - multus-admission-controller should not run as root under Hypershift-managed CNO\nOCPBUGS-11589 - Ensure systemd is compatible with rhel8 journalctl\nOCPBUGS-11598 - openshift-azure-routes triggered continously on rhel9\nOCPBUGS-11606 - User configured In-cluster proxy configuration 
squashed in hypershift\nOCPBUGS-11643 - Updating kube-rbac-proxy images to be consistent with ART\nOCPBUGS-11657 - [4.13] Static IPv6 LACP bonding is randomly failing in RHCOS 413.92\nOCPBUGS-11659 - Error extracting libnmstate.so.1.3.3 when create image\nOCPBUGS-11661 - AWS s3 policy changes block all OCP installs on AWS\nOCPBUGS-11669 - Bump to kubernetes 1.26.3\nOCPBUGS-11683 - [4.13] Add Controller health to CEO liveness probe\nOCPBUGS-11694 - [4.13] Update legacy toolbox to use registry.redhat.io/rhel9/support-tools\nOCPBUGS-11706 - ccoctl cannot create STS documents in 4.10-4.13 due to s3 policy changes\nOCPBUGS-11750 - TuningCNI cnf-test failure: sysctl allowlist update\nOCPBUGS-11765 - [4.13] Keep current OpenSSH default config in RHCOS 9\nOCPBUGS-11776 - [4.13] VSphereStorageDriver does not document the platform default\nOCPBUGS-11778 - Upgrade SNO: no resolv.conf caused by failure in forcedns dispatcher script\nOCPBUGS-11787 - Update 4.14 ose-vmware-vsphere-csi-driver image to be consistent with ART\nOCPBUGS-11789 - [4.13] Bootimage bump tracker\nOCPBUGS-11799 - [4.13] Bootimage bump tracker\nOCPBUGS-11823 - [Reliability]kube-apiserver\u0027s memory usage keep increasing to max 3GB in 7 days\nOCPBUGS-11848 - PtpOperatorsConfig not applying correctly\nOCPBUGS-11866 - Pipeline is not removed when Deployment/DC/Knative Service or Application is deleted\nOCPBUGS-11870 - [4.13] Nodes in Ironic are created without namespaces initially\nOCPBUGS-11876 - oc-mirror generated file-based catalogs crashloop\nOCPBUGS-11908 - Got the `file exists` error when different digest direct to the same tag\nOCPBUGS-11917 - the warn message won\u0027t disappear in co/node-tuning when scale down machineset\nOCPBUGS-11919 - Console metrics could have a high cardinality (4.13)\nOCPBUGS-11950 - fail to create vSphere IPI cluster as apiVIP and ingressVIP are not in machine networks\nOCPBUGS-11955 - NTP config not applied\nOCPBUGS-11968 - Instance shouldn\u0027t be moved back from f to a\nOCPBUGS-11985 - [4.13] Ironic inspector service should be proxied\nOCPBUGS-12172 - Users don\u0027t know what type of resource is being created by Import from Git or Deploy Image flows\nOCPBUGS-12179 - agent-tui is failing to start when using libnmstate.2\nOCPBUGS-12186 - Pipeline doesn\u0027t render correctly when displayed but looks fine in edit mode\nOCPBUGS-12198 - create hosted cluster failed with aws s3 access issue\nOCPBUGS-12212 - cluster failed to convert from dualstack to ipv4 single stack\nOCPBUGS-12225 - Add new OCP 4.13 storage admission plugin\nOCPBUGS-12257 - Catalogs rebuilt by oc-mirror are in crashloop : cache is invalid\nOCPBUGS-12259 - oc-mirror fails to complete with heads only complaining about devworkspace-operator\nOCPBUGS-12271 - Hypershift conformance test fails new cpu partitioning tests\nOCPBUGS-12272 - Importing a kn Service shows a non-working Open URL decorator also when the Add Route checkbox was unselected\nOCPBUGS-12273 - When Creating Sample Devfile from the Samples Page, Topology Icon is not set\nOCPBUGS-12450 - [4.13] Fix Flake TestAttemptToScaleDown/scale_down_only_by_one_machine_at_a_time\nOCPBUGS-12465 - --use-oci-feature leads to confusion and needs to be better named\nOCPBUGS-12478 - CSI driver + operator containers are not pinned to mgmt cores\nOCPBUGS-1264 - e2e-vsphere-zones failing due to unable to parse cloud-config\nOCPBUGS-12698 - redfish-virtualmedia mount not working \nOCPBUGS-12703 - redfish-virtualmedia mount not working \nOCPBUGS-12708 - [4.13] Changing a 
PreprovisioningImage ImageURL and/or ExtraKernelParams should reboot the host\nOCPBUGS-1272 - \"opm alpha render-veneer basic\" doesn\u0027t support pipe stdin\nOCPBUGS-12737 - Multus admission controller must have \"hypershift.openshift.io/release-image\" annotation when CNO is managed by Hypershift\nOCPBUGS-12786 - OLM CatalogSources in guest cluster cannot pull images if pre-GA\nOCPBUGS-12804 - Dual stack VIPs incompatible with EnableUnicast setting\nOCPBUGS-12854 - `cluster-reader` role cannot access \"k8s.ovn.org\" API Group resources\nOCPBUGS-12862 - IPv6 ingress VIP not configured in keepalived on vSphere Dual-stack\nOCPBUGS-12865 - Kubernetes-NMState CI is perma-failing\nOCPBUGS-12933 - Node Tuning Operator crashloops when in Hypershift mode\nOCPBUGS-12994 - TCP DNS Local Preference is not working for Openshift SDN\nOCPBUGS-12999 - Backport owners through 4.13, 4.12\nOCPBUGS-13029 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13\nOCPBUGS-13057 - ppc64le releases don\u0027t install because ovs fails to start (invalid permissions)\nOCPBUGS-13069 - [whereabouts-cni] CNO must use reconciliation controller in order to support dual stack in 4.12 [4.13 dependency]\nOCPBUGS-13071 - CI fails on TestClientTLS\nOCPBUGS-13072 - Capture tests don\u0027t work in OVNK\nOCPBUGS-13076 - Load balancers/ Ingress controller removal race condition\nOCPBUGS-13157 - CI fails on TestRouterCompressionOperation\nOCPBUGS-13254 - Nutanix cloud provider should use Kubernetes 1.26 dependencies\nOCPBUGS-1327 - [IBMCloud] Worker machines unreachable during initial bring up\nOCPBUGS-1352 - OVN silently failing in case of a stuck pod\nOCPBUGS-1427 - Ignore non-ready endpoints when processing endpointslices\nOCPBUGS-1428 - service account token secret reference\nOCPBUGS-1435 - [Ingress Node Firewall Operator] [Web Console] Allow user to override namespace where the operator is installed, currently user can install it only in openshift-operators ns\nOCPBUGS-1443 - Unable to get ClusterVersion error while upgrading 4.11 to 4.12\nOCPBUGS-1453 - TargetDown alert expression is NOT correctly joining kube-state-metrics metric\nOCPBUGS-1458 - cvo pod crashloop during bootstrap: featuregates: connection refused\nOCPBUGS-1486 - Avoid re-metric\u0027ing the pods that are already setup when ovnkube-master disrupts/reinitializes/restarts/goes through leader election\nOCPBUGS-1557 - Default to floating automaticRestart for new GCP instances\nOCPBUGS-1560 - [vsphere] installation fails when only configure single zone in install-config\nOCPBUGS-1565 - Possible split brain with keepalived unicast\nOCPBUGS-1566 - Automation Offline CPUs Test cases\nOCPBUGS-1577 - Incorrect network configuration in worker node with two interfaces\nOCPBUGS-1604 - Common resources out-of-date when using multicluster switcher\nOCPBUGS-1606 - Multi-cluster: We should not filter OLM catalog by console pod architecture and OS on managed clusters \nOCPBUGS-1612 - [vsphere] installation errors out when missing topology in a failure domain\nOCPBUGS-1617 - Remove unused node.kubernetes.io/not-reachable toleration\nOCPBUGS-1627 - [vsphere] installation fails when setting user-defined folder in failure domain\nOCPBUGS-1646 - [osp][octavia lb] LBs type svcs not updated until all the LBs are created\nOCPBUGS-166 - 4.11 SNOs fail to complete install because of \"failed to get pod annotation: timed out waiting for annotations: context deadline exceeded\"\nOCPBUGS-1665 - Scorecard failed because of the request of PodSecurity\nOCPBUGS-1671 - 
Creating a statefulset with the example image from the UI on ARM64 leads to a Pod in crashloopbackoff due to the only-amd64 image provided\nOCPBUGS-1704 - [gcp] when the optional Service Usage API is disabled, IPI installation cannot succeed\nOCPBUGS-1725 - Affinity rule created in router deployment for single-replica infrastructure and \"NodePortService\" endpoint publishing strategy\nOCPBUGS-1741 - Can\u0027t load additional Alertmanager templates with latest 4.12 OpenShift\nOCPBUGS-1748 - PipelineRun templates must be fetched from OpenShift namespace\nOCPBUGS-1761 - osImages that cannot be pulled do not set the node as Degraded properly\nOCPBUGS-1769 - gracefully fail when iam:GetRole is denied\nOCPBUGS-1778 - Can\u0027t install clusters with schedulable masters\nOCPBUGS-1791 - Wait-for install-complete  did not exit upon completion. \nOCPBUGS-1805 - [vsphere-csi-driver-operator] CSI cloud.conf doesn\u0027t list multiple datacenters when specified \nOCPBUGS-1807 - Ingress Operator startup bad log message formatting\nOCPBUGS-1844 - Ironic dnsmasq doesn\u0027t include existing DNS settings during iPXE boot\nOCPBUGS-1852 - [RHOCP 4.10] Subscription tab for operator doesn\u0027t land on correct URL\nOCPBUGS-186 - PipelineRun task status overlaps status text\nOCPBUGS-1998 - Cluster monitoring fails to achieve new level during upgrade w/ unavailable node\nOCPBUGS-2015 - TestCertRotationTimeUpgradeable failing consistently in kube-apiserver-operator\nOCPBUGS-2083 - OCP 4.10.33 uses a weak 3DES cipher in the VMWare CSI Operator for communication and provides no method to disable it\nOCPBUGS-2088 - User can set rendezvous host to be a worker\nOCPBUGS-2141 - doc link in PrometheusDataPersistenceNotConfigured message is 4.8\nOCPBUGS-2145 - \u0027maxUnavailable\u0027 and \u0027minAvailable\u0027 on PDB creation page - i18n misses\nOCPBUGS-2209 - Hard eviction thresholds is different with k8s default when PAO is enabled\nOCPBUGS-2248 - [alibabacloud] IPI installation failed with master nodes being NotReady and CCM error \"alicloud: unable to split instanceid and region from providerID\"\nOCPBUGS-2260 - KubePodNotReady - Increase Tolerance During Master Node Restarts\nOCPBUGS-2306 - On Make Serverless page, to change values of the inputs minpod, maxpod and concurrency fields, we need to click the ? + ? or ? - \u0027, it can\u0027t be changed by typing in it. 
\nOCPBUGS-2319 - metal-ipi upgrade success rate dropped 30+% in last week\nOCPBUGS-2384 - [2035720] [IPI on Alibabacloud] deploying a private cluster by \u0027publish: Internal\u0027 failed due to \u0027dns_public_record\u0027\nOCPBUGS-2440 - unknown field logs in prometheus-operator\nOCPBUGS-2471 - BareMetalHost is available without cleaning if the cleaning attempt fails\nOCPBUGS-2479 - Right border radius is 0 for the pipeline visualization wrapper in dark mode\nOCPBUGS-2500 - Developer Topology always blanks with large contents when first rendering\nOCPBUGS-2513 - Disconnected cluster installation fails with pull secret must contain auth for \"registry.ci.openshift.org\" \nOCPBUGS-2525 - [CI Watcher] Ongoing timeout failures associated with multiple CRD-extensions tests\nOCPBUGS-2532 - Upgrades from 4.11.9 to latest 4.12.x Nightly builds do not succeed\nOCPBUGS-2551 - \"Error loading\" when normal user check operands on All namespaces\nOCPBUGS-2569 - ovn-k network policy races\nOCPBUGS-2579 - Helm Charts and Samples are not disabled in topology actions if actions are disabled in customization\nOCPBUGS-266 - Project Access tab cannot differentiate between users and groups\nOCPBUGS-2666 - `create a project` link not backed by RBAC check\nOCPBUGS-272 - Getting duplicate word \"find\" when kube-apiserver degraded=true if webhook matches a virtual resource\nOCPBUGS-2727 - ClusterVersionRecommendedUpdate condition blocks explicitly allowed upgrade which is not in the available updates\nOCPBUGS-2729 - should ignore enP.* NICs from node-exporter on Azure cluster\nOCPBUGS-2735 - Operand List Page Layout Incorrect on small screen size. \nOCPBUGS-2738 - CVE-2022-26945 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323 ose-baremetal-installer-container: various flaws [openshift-4.13.z]\nOCPBUGS-2824 - The dropdown list component will be covered by deployment details page on Topology page\nOCPBUGS-2827 - OVNK: NAT issue for packets exceeding check_pkt_larger() for NodePort services that route to hostNetworked pods\nOCPBUGS-2841 - Need validation rule for supported arch\nOCPBUGS-2845 - Unable to use application credentials for Cinder CSI after OpenStack credentials update\nOCPBUGS-2847 - GCP XPN should only be available with Tech Preview\nOCPBUGS-2851 - [OCI feature] registries.conf support in oc mirror\nOCPBUGS-2852 - etcd failure: failed to make etcd client for endpoints [https://[2620:52:0:1eb:367x:5axx:xxx:xxx]:2379]: context deadline exceeded \nOCPBUGS-2868 - Container networking pods cannot be access hosted network pods on another node in ipv6 single stack cluster\nOCPBUGS-2873 - Prometheus doesn\u0027t reload TLS certificate and key files on disk\nOCPBUGS-2886 - The LoadBalaner section shouldn\u0027t be set when using Kuryr on cloud-provider\nOCPBUGS-2891 - AWS Deprovision Fails with unrecognized elastic load balancing resource type listener \nOCPBUGS-2895 - [RFE] 4.11 Azure DiskEncryptionSet static validation does not support upper-case letters\nOCPBUGS-2904 - If all the actions are disabled in add page, Details on/off toggle switch to be disabled\nOCPBUGS-2907 - provisioning of baremetal nodes fails when using multipath device as rootDeviceHints\nOCPBUGS-2921 - br-ex interface not configured makes ovnkube-node Pod to crashloop \nOCPBUGS-2922 - \u0027Status\u0027 column sorting doesn\u0027t work as expected\nOCPBUGS-2926 - Unable to gather OpenStack console logs since kernel cmd line has no console args\nOCPBUGS-2934 - Ingress node firewall pod \u0027s events container on the node causing pod in 
CrashLoopBackOff state when sctp module is loaded on node\nOCPBUGS-2941 - CIRO unable to detect swift when content-type is omitted in 204-responses\nOCPBUGS-2946 - [AWS] curl network Loadbalancer always get \"Connection time out\"\nOCPBUGS-2948 - Whereabouts CNI timesout while iterating exclude range\nOCPBUGS-2988 - apiserver pods cannot reach etcd on single node IPv6 cluster: transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10\"\nOCPBUGS-2991 - CI jobs are failing with: admission webhook \"validation.csi.vsphere.vmware.com\" denied the request\nOCPBUGS-2992 - metal3 pod crashloops on OKD in BareMetal IPI or assisted-installer bare metal installations\nOCPBUGS-2994 - Keepalived monitor stuck for long period of time on kube-api call while installing\nOCPBUGS-2996 - [4.13] Bootimage bump tracker\nOCPBUGS-3018 - panic in WaitForBootstrapComplete\nOCPBUGS-3021 - GCP: missing me-west1 region\nOCPBUGS-3024 - Service list shows undefined:80 when type is ExternalName or LoadBalancer\nOCPBUGS-3027 - Metrics are not available when running console in development mode\nOCPBUGS-3029 - BareMetalHost CR fails to delete on cluster cleanup\nOCPBUGS-3033 - Clicking the logo in the masthead goes to `/dashboards`, even if metrics are disabled\nOCPBUGS-3041 - Guard Pod Hostnames Too Long and Truncated Down Into Collisions With Other Masters\nOCPBUGS-3069 - Should show information on page if the upgrade to a target version doesn\u0027t take effect. \nOCPBUGS-3072 - Operator-sdk run bundle with old  sqllite index image failed \nOCPBUGS-3079 - RPS hook only sets the first queue, but there are now many\nOCPBUGS-3085 - [IPI-BareMetal]: Dual stack deployment failed on BootStrap stage  \nOCPBUGS-3093 - The control plane should tag AWS security groups at creation\nOCPBUGS-3096 - The terraform binaries shipped by the installer are not statically linked\nOCPBUGS-3109 - Change text colour for ConsoleNotification that notifies user that the cluster is being \nOCPBUGS-3114 - CNO reporting incorrect status\nOCPBUGS-3123 - Operator attempts to render both GA and Tech Preview API Extensions\nOCPBUGS-3127 - nodeip-configuration retries forever on network failure, blocking ovs-configuration, spamming syslog\nOCPBUGS-3168 - Add Capacity button does not exist after upgrade OCP version [OCP4.11-\u003eOCP4.12]\nOCPBUGS-3172 - Console shouldn\u0027t try to install dynamic plugins if permissions aren\u0027t available\nOCPBUGS-3180 - Regression in ptp-operator conformance tests\nOCPBUGS-3186 - [ibmcloud] unclear error msg when zones is not match with the Subnets in BYON install\nOCPBUGS-3192 - [4.8][OVN] RHEL 7.9 DHCP worker ovs-configuration fails \nOCPBUGS-3195 - Service-ca controller exits immediately with an error on sigterm\nOCPBUGS-3206 - [sdn2ovn] Migration failed in vsphere cluster\nOCPBUGS-3207 - SCOS build fails due to pinned kernel\nOCPBUGS-3214 - Installer does not always add router CA to kubeconfig\nOCPBUGS-3228 - Broken secret created while starting a Pipeline\nOCPBUGS-3235 - Topology gets stuck loading\nOCPBUGS-3245 - ovn-kubernetes ovnkube-master containers crashlooping after 4.11.0-0.okd-2022-10-15-073651 update\nOCPBUGS-3248 - CVE-2022-27191 ose-installer-container: golang: crash in a golang.org/x/crypto/ssh server [openshift-4]\nOCPBUGS-3253 - No warning when using wait-for vs. 
agent wait-for commands\nOCPBUGS-3272 - Unhealthy Readiness probe failed message failing CI when ovnkube DBs are still coming up\nOCPBUGS-3275 - No-op: Unable to retrieve machine from node \"xxx\": expecting one machine for node xxx got: []\nOCPBUGS-3277 - Install failure in create-cluster-and-infraenv.service\nOCPBUGS-3278 - Shouldn\u0027t need to put host data in platform baremetal section in installconfig\nOCPBUGS-3280 - Install ends in preparing-failed due to container-images-available validation\nOCPBUGS-3283 - remove unnecessary RBAC in KCM\nOCPBUGS-3292 - DaemonSet \"/openshift-network-diagnostics/network-check-target\" is not available\nOCPBUGS-3314 - \u0027gitlab.secretReference\u0027 disappears when the buildconfig is edited on ?From View?\nOCPBUGS-3316 - Branch name should sanitised to match actual github branch name in repository plr list\nOCPBUGS-3320 - New master will be created if add duplicated failuredomains in controlplanemachineset\nOCPBUGS-3331 - Update dependencies in CMO release 4.13\nOCPBUGS-3334 - Console should be using v1 apiVersion for ConsolePlugin model\nOCPBUGS-3337 - revert \"force cert rotation every couple days for development\" in 4.12\nOCPBUGS-3338 - Environment cannot find Python\nOCPBUGS-3358 - Revert BUILD-407\nOCPBUGS-3372 - error message is too generic when creating a silence with end time before start\nOCPBUGS-3373 - cluster-monitoring-view user can not list servicemonitors on \"Observe -\u003e Targets\" page\nOCPBUGS-3377 - CephCluster and StorageCluster resources use the same paths\nOCPBUGS-3381 - Make ovnkube-trace work on hypershift deployments\nOCPBUGS-3382 - Unable to configure cluster-wide proxy\nOCPBUGS-3391 - seccomp profile unshare.json missing from nodes\nOCPBUGS-3395 - Event Source is visible without even creating knative-eventing and knative-serving. 
\nOCPBUGS-3404 - IngressController.spec.nodePlacement.nodeSelector.matchExpressions does not work\nOCPBUGS-3414 - Missing \u0027ImageContentSourcePolicy\u0027 and \u0027CatalogSource\u0027 in the oci fbc feature implementation\nOCPBUGS-3424 - Azure Disk CSI Driver Operator gets degraded without \"CSISnapshot\" capability\nOCPBUGS-3426 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13\nOCPBUGS-3427 - Skip broken [sig-devex][Feature:ImageEcosystem] tests\nOCPBUGS-3438 - cloud-network-config-controller not using proxy settings of the management cluster\nOCPBUGS-3440 - Authentication operator doesn\u0027t respond to console being enabled\nOCPBUGS-3441 - Update cluster-authentication-operator not to go degraded without console\nOCPBUGS-3444 - [4.13] Descheduler pod is OOM killed when using descheduler-operator profiles on big clusters\nOCPBUGS-3456 - track `rhcos-4.12` branch for fedora-coreos-config submodule\nOCPBUGS-3458 - Surface ClusterVersion RetrievedUpdates condition messages\nOCPBUGS-3465 - IBM operator needs deployment manifest fixes\nOCPBUGS-3473 - Allow listing crio and kernel versions in machine-os components\nOCPBUGS-3476 - Show Tag label and tag name if tag is detected in repository PipelineRun list and details page\nOCPBUGS-3480 - Baremetal Provisioning fails on HP Gen9 systems due to eTag handling\nOCPBUGS-3499 - Route CRD validation behavior must be the same as openshift-apiserver behavior\nOCPBUGS-3501 - Route CRD host-assignment behavior must be the same as openshift-apiserver behavior\nOCPBUGS-3502 - CRD-based and openshift-apiserver-based Route validation/defaulting must use the shared implementation\nOCPBUGS-3508 - masters repeatedly losing connection to API and going NotReady\nOCPBUGS-3524 - The storage account for the CoreOS image is publicly accessible when deploying fully private cluster on Azure\nOCPBUGS-3526 - oc fails to extract layers that set xattr on Darwin\nOCPBUGS-3539 - [OVN-provider]loadBalancer svc with monitors not working\nOCPBUGS-3612 - [IPI] Baremetal ovs-configure.sh script fails to start secondary bridge br-ex1\nOCPBUGS-3621 - EUS upgrade stuck on worker pool update: error running skopeo inspect --no-tags\nOCPBUGS-3648 - Container security operator Image Manifest Vulnerabilities encounters runtime errors under some circumstances\nOCPBUGS-3659 - Expose AzureDisk metrics port over HTTPS\nOCPBUGS-3662 - don\u0027t enforce PSa in 4.12\nOCPBUGS-3667 - PTP 4.12 Regression - CLOCK REALTIME status is locked when physical interface is down\nOCPBUGS-3668 - 4.12.0-rc.0 fails to deploy on VMware IPI\nOCPBUGS-3676 - After node\u0027s reboot some pods fail to start - deleteLogicalPort failed for pod cannot delete GR SNAT for pod\nOCPBUGS-3693 - Router e2e: drop template.openshift.io apigroup dependency\nOCPBUGS-3709 - Special characters in subject name breaks prefilling role binding form\nOCPBUGS-3713 - [vsphere-problem-detector] fully qualified username must be used when checking permissions\nOCPBUGS-3714 - \u0027oc adm upgrade ...\u0027 should expose ClusterVersion Failing=True\nOCPBUGS-3739 - Pod stuck in containerCreating state when the node on which it is running is Terminated\nOCPBUGS-3744 - Egress router POD creation is failing while using openshift-sdn network plugin\nOCPBUGS-3755 - Create Alertmanager silence form does not explain the new \"Negative matcher\" option\nOCPBUGS-3761 - Consistent e2e test failure:Events.Events: event view displays created pod\nOCPBUGS-3765 - [RFE] Add kernel-rpm-macros to DTK image\nOCPBUGS-3771 - 
contrib/multicluster-environment.sh needs to be updated to work with ACM cluster proxy\nOCPBUGS-3776 - Manage columns tooltip remains displayed after dialog is closed\nOCPBUGS-3777 - [Dual Stack] ovn-ipsec crashlooping due to cert signing issues\nOCPBUGS-3797 - [4.13] Bump OVS control plane to get \"ovsdb/transaction.c: Refactor assess_weak_refs.\"\nOCPBUGS-3822 - Cluster-admin cannot know whether operator is fully deleted or not after normal user trigger \"Delete CSV\"\nOCPBUGS-3827 - CCM not able to remove a LB in ERROR state\nOCPBUGS-3877 - RouteTargetReference missing default for \"weight\" in Route CRD v1 schema\nOCPBUGS-3880 - [Ingress Node Firewall] Change the logo used for ingress node firewall operator\nOCPBUGS-3883 - Hosted ovnkubernetes pods are not being spread among workers evenly\nOCPBUGS-3896 - Console nav toggle button reports expanded in both expanded and not expanded states\nOCPBUGS-3904 - Delete/Add a failureDomain in CPMS to trigger update cannot work right on GCP\nOCPBUGS-3909 - Node is degraded when a machine config deploys a unit with content and mask=true\nOCPBUGS-3916 - expr for SDNPodNotReady is wrong due to there is not node label for kube_pod_status_ready\nOCPBUGS-3919 - Azure: unable to configure EgressIP if an ASG is set\nOCPBUGS-3921 - Openshift-install bootstrap operation cannot find a cloud defined in clouds.yaml in the current directory\nOCPBUGS-3923 - [CI] cluster-monitoring-operator produces more watch requests than expected\nOCPBUGS-3924 - Remove autoscaling/v2beta2 in 4.12 and later\nOCPBUGS-3929 - Use flowcontrol/v1beta2 for apf manifests in 4.13\nOCPBUGS-3931 - When all extensions are installed,  \"libkadm5\" rpm package is duplicated in the `rpm -q` command\nOCPBUGS-3933 - Fails to deprovision cluster when swift omits \u0027content-type\u0027\nOCPBUGS-3945 - Handle 0600 kubeconfig\nOCPBUGS-3951 - Dynamic plugin extensions disappear from the UI when a codeRef fails to load\nOCPBUGS-3960 - Use kernel-rt from ose repo\nOCPBUGS-3965 - must-gather namespace should have ?privileged? warn and audit pod security labels besides enforce\nOCPBUGS-3973 - [SNO] csi-snapshot-controller CO is degraded when upgrade from 4.12 to 4.13 and reports permissions issue. 
\nOCPBUGS-3974 - CIRO panics when suspended flag is nil\nOCPBUGS-3975 - \"Failed to open directory, disabling udev device properties\" in node-exporter logs\nOCPBUGS-3978 - AWS EBS CSI driver operator is degraded without \"CSISnapshot\" capability\nOCPBUGS-3985 - Allow PSa enforcement in 4.13 by using featuresets\nOCPBUGS-3987 - Some nmstate validations are skipped when NM config is in agent-config.yaml\nOCPBUGS-3990 - HyperShift control plane operators have wrong priorityClass\nOCPBUGS-3993 - egressIP annotation including two interfaces when multiple networks\nOCPBUGS-4000 - fix operator naming convention \nOCPBUGS-4008 - Console deployment does not roll out when managed cluster configmap is updated\nOCPBUGS-4012 - Disabled Serverless add actions should not be displayed in topology menu\nOCPBUGS-4026 - Endless rerender loop and a stuck browser on the add and topology page when SBO is installed\nOCPBUGS-4047 - [CI-Watcher] e2e test flake: Create key/value secrets Validate a key/value secret\nOCPBUGS-4049 - MCO reconcile fails if user replace the pull secret to empty one\nOCPBUGS-4052 - [ALBO] OpenShift Load Balancer Operator does not properly support cluster wide proxy\nOCPBUGS-4054 - cluster-ingress-operator\u0027s configurable-route controller\u0027s startup is noisy\nOCPBUGS-4089 - Kube-State-metrics pod fails to start due to panic\nOCPBUGS-4090 - OCP on OSP - Image registry is deployed with cinder instead of swift storage backend \nOCPBUGS-4101 - Empty/missing node-sizing SYSTEM_RESERVED_ES parameter can result in kubelet not starting\nOCPBUGS-4110 - Form footer buttons are misaligned in web terminal form\nOCPBUGS-4119 - Random SYN drops in OVS bridges of OVN-Kubernetes\nOCPBUGS-4166 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13\nOCPBUGS-4168 - Prometheus continuously restarts due to slow WAL replay\nOCPBUGS-4173 - vsphere-problem-detector should re-check passwords after change\nOCPBUGS-4181 - Prometheus and Alertmanager incorrect ExternalURL configured\nOCPBUGS-4184 - Use mTLS authentication for all monitoring components instead of bearer token\nOCPBUGS-4203 - Unnecessary padding around alert atop debug pod terminal\nOCPBUGS-4206 - getContainerStateValue contains incorrectly internationalized text\nOCPBUGS-4207 - Remove debug level logging on openshift-config-operator\nOCPBUGS-4219 - Add runbook link to PrometheusRuleFailures\nOCPBUGS-4225 - [4.13] boot sequence override request fails with Base.1.8.PropertyNotWritable on Lenovo SE450\nOCPBUGS-4232 - CNCC: Wrong log format for Azure locking\nOCPBUGS-4245 - L2 does not work if a metallb is not able to listen to arp requests on a single interface\nOCPBUGS-4252 - Node Terminal tab results in error\nOCPBUGS-4253 - Add PodNetworkConnectivityCheck for must-gather\nOCPBUGS-4266 - crio.service should use a more safe restart policy to provide recoverability against concurrency issues\nOCPBUGS-4279 - Custom Victory-Core components in monitoring ui code causing build issues \nOCPBUGS-4280 - Return 0 when `oc import-image` failed\nOCPBUGS-4282 - [IR-269]Can\u0027t pull sub-manifest image using imagestream of manifest list\nOCPBUGS-4291 - [OVN]Sometimes after reboot egress node, egress IP cannot be applied anymore. 
\nOCPBUGS-4293 - Specify resources.requests for operator pod\nOCPBUGS-4298 - Specify resources.requests for operator pod\nOCPBUGS-4302 - Specify resources.requests for operator pod\nOCPBUGS-4305 - [4.13] Improve ironic logging configuration in metal3\nOCPBUGS-4317 - [IBM][4.13][Snapshot] restore size in snapshot is not the same size of pvc request size \nOCPBUGS-4328 - Update installer images to be consistent with ART\nOCPBUGS-434 - After FIPS enabled in S390X, ingress controller in degraded state\nOCPBUGS-4343 - Use flowcontrol/v1beta3 for apf manifests in 4.13\nOCPBUGS-4347 - set TLS cipher suites in Kube RBAC sidecars\nOCPBUGS-4350 - CNO in HyperShift reports upgrade complete in clusteroperator prematurely\nOCPBUGS-4352 - [RHOCP] HPA shows different API versions in web console\nOCPBUGS-4357 - Bump samples operator k8s dep to 1.25.2\nOCPBUGS-4359 - cluster-dns-operator corrupts /etc/hosts when fs full\nOCPBUGS-4367 - Debug log messages missing from output and Info messages malformed\nOCPBUGS-4377 - Service name search ability while creating the Route from console\nOCPBUGS-4401 - limit cluster-policy-controller RBAC permissions\nOCPBUGS-4411 - ovnkube node pod crashed after converting to a dual-stack cluster network\nOCPBUGS-4417 - ip-reconciler removes the overlappingrangeipreservations whether the pod is alive or not\nOCPBUGS-4425 - Egress FW ACL rules are invalid in dualstack mode\nOCPBUGS-4447 - [MetalLB Operator] The CSV needs an update to reflect the correct version of operator\nOCPBUGS-446 - Cannot Add a project from DevConsole in airgap mode using git importing\nOCPBUGS-4483 - apply retry logic to ovnk-node controllers\nOCPBUGS-4490 - hypershift: csi-snapshot-controller uses wrong kubeconfig\nOCPBUGS-4491 - hypershift: aws-ebs-csi-driver-operator uses wrong kubeconfig\nOCPBUGS-4492 - [4.13] The property TransferProtocolType is required for VirtualMedia.InsertMedia\nOCPBUGS-4502 - [4.13] [OVNK] Add support for service session affinity timeout\nOCPBUGS-4516 - `oc-mirror` does not work as expected relative path for OCI format copy \nOCPBUGS-4517 - Better to detail the --command-os of mac for `oc adm release extract` command\nOCPBUGS-4521 - all kubelet targets are down after a few hours\nOCPBUGS-4524 - Hold lock when deleting completed pod during update event\nOCPBUGS-4525 - Don\u0027t log in iterateRetryResources when there are no retry entries\nOCPBUGS-4535 - There is no 4.13 gcp-filestore-csi-driver-operator version for test\nOCPBUGS-4536 - Image registry panics while deploying OCP in eu-south-2 AWS region\nOCPBUGS-4537 - Image registry panics while deploying OCP in eu-central-2 AWS region\nOCPBUGS-4538 - Image registry panics while deploying OCP in ap-south-2 AWS region\nOCPBUGS-4541 - Azure: remove deprecated ADAL\nOCPBUGS-4546 - CVE-2021-38561 ose-installer-container: golang: out-of-bounds read in golang.org/x/text/language leads to DoS [openshift-4]\nOCPBUGS-4549 - Azure: replace deprecated AD Graph API\nOCPBUGS-4550 - [CI] console-operator produces more watch requests than expected\nOCPBUGS-4571 - The operator recommended namespace is incorrect after change installation mode to \"A specific namespace on the cluster\"\nOCPBUGS-4574 - Machine stuck in no phase when creating in a nonexistent zone and stuck in Deleting when deleting on GCP\nOCPBUGS-463 - OVN-Kubernetes should not send IPs with leading zeros to OVN\nOCPBUGS-4630 - Bump documentationBaseURL to 4.13\nOCPBUGS-4635 - [OCP 4.13] ironic container images have old packages\nOCPBUGS-4638 - Support RHOBS monitoring for 
HyperShift in CNO\nOCPBUGS-4652 - Fixes for RHCOS 9 based on RHEL 9.0\nOCPBUGS-4654 - Azure: UPI: Fix storage arm template to work with Galleries and MAO\nOCPBUGS-4659 - Network Policy executes duplicate transactions for every pod update\nOCPBUGS-4684 - In DeploymentConfig both the Form view and Yaml view are not in sync\nOCPBUGS-4689 - SNO not able to bring up Provisioning resource in 4.11.17\nOCPBUGS-4691 - Topology sidebar actions doesn\u0027t show the latest resource data\nOCPBUGS-4692 - PTP operator: Use priority class node critical\nOCPBUGS-4700 - read-only update UX: confusing \"Update blocked\" pop-up\nOCPBUGS-4701 - read-only update UX: confusing \"Control plane is hosted\" banner\nOCPBUGS-4703 - Router can migrate to use LivenessProbe.TerminationGracePeriodSeconds\nOCPBUGS-4712 - ironic-proxy daemonset not deleted when provisioningNetwork is changed from Disabled to Managed/Unmanaged\nOCPBUGS-4724 - [4.13] egressIP annotations not present on OpenShift on Openstack multiAZ installation\nOCPBUGS-4725 - mapi_machinehealthcheck_short_circuit not properly reconciling causing MachineHealthCheckUnterminatedShortCircuit alert to fire\nOCPBUGS-4746 - Removal of detection of host kubelet kubeconfig breaks IBM Cloud ROKS\nOCPBUGS-4756 - OLM generates invalid component selector labels\nOCPBUGS-4757 - Revert Catalog PSA decisions for 4.13 (OLM)\nOCPBUGS-4758 - Revert Catalog PSA decisions for 4.13 (Marketplace)\nOCPBUGS-4769 - Old AWS boot images vs. 4.12: unknown provider \u0027ec2\u0027\nOCPBUGS-4780 - Update openshift/builder release-4.13 to go1.19\nOCPBUGS-4781 - Get Helm Release seems to be using List Releases api\nOCPBUGS-4793 - CMO may generate Kubernetes events with a wrong object reference\nOCPBUGS-4802 - Update formatting with gofmt for go1.19\nOCPBUGS-4825 - Pods completed + deleted may leak\nOCPBUGS-4827 - Ingress Controller is missing a required AWS resource permission for SC2S region us-isob-east-1\nOCPBUGS-4873 - openshift-marketplace namespace missing \"audit-version\" and \"warn-version\" PSA label\nOCPBUGS-4874 - Baremetal host data is still sometimes required\nOCPBUGS-4883 - Default Git type to other info alert should get remove after changing the git type\nOCPBUGS-4894 - Disabled Serverless add actions should not be displayed for Knative Service\nOCPBUGS-4899 - coreos-installer output not available in the logs\nOCPBUGS-4900 - Volume limits test broken on AWS and GCP TechPreview clusters\nOCPBUGS-4906 - Cross-namespace template processing is not being tested\nOCPBUGS-4909 - Can\u0027t reach own service when egress netpol are enabled\nOCPBUGS-4913 - Need to wait longer for VM to obtain IP from DHCP\nOCPBUGS-4941 - Fails to deprovision cluster when swift omits \u0027content-type\u0027 and there are empty containers\nOCPBUGS-4950 - OLM K8s Dependencies should be at 1.25\nOCPBUGS-4954 - [IBMCloud] COS Reclamation prevents ResourceGroup cleanup\nOCPBUGS-4955 - Bundle Unpacker Using \"Always\" ImagePullPolicy for digests\nOCPBUGS-4969 - ROSA Machinepool EgressIP Labels Not Discovered\nOCPBUGS-4975 - Missing translation in ceph storage plugin\nOCPBUGS-4986 - precondition: Do not claim warnings would have blocked\nOCPBUGS-4997 - Agent ISO does not respect proxy settings\nOCPBUGS-5001 - MachineConfigControllerPausedPoolKubeletCA should have a working runbook URI\nOCPBUGS-501 - oc get dc fails when AllRequestBodies audit-profile is set in apiserver\nOCPBUGS-5010 - Should always delete the must-gather pod when run the must-gather\nOCPBUGS-5016 - Editing Pipeline in the ocp console to 
get information error\nOCPBUGS-5018 - Upgrade from 4.11 to  4.12 with Windows machine workers (Spot Instances) failing due to: hcnCreateEndpoint failed in Win32: The object already exists. \nOCPBUGS-5036 - Cloud Controller Managers do not react to changes in configuration leading to assorted errors\nOCPBUGS-5045 - unit test data race with egress ip tests\nOCPBUGS-5068 - [4.13] virtual media provisioning fails when iLO Ironic driver is used\nOCPBUGS-5073 - Connection reset by peer issue with SSL OAuth Proxy when route objects are created more than 80. \nOCPBUGS-5079 - [CI Watcher] pull-ci-openshift-console-master-e2e-gcp-console jobs: Process did not finish before 4h0m0s timeout\nOCPBUGS-5085 - Should only show the selected catalog when after apply  the ICSP and catalogsource\nOCPBUGS-5101 - [GCP] [capi] Deletion of cluster  is happening  , it shouldn\u0027t be allowed\nOCPBUGS-5116 - machine.openshift.io API is not supported in Machine API webhooks\nOCPBUGS-512 - Permission denied when write data to mounted gcp filestore volume instance\nOCPBUGS-5124 - kubernetes-nmstate does not pass CVP tests in 4.12\nOCPBUGS-5136 - provisioning on ilo4-virtualmedia BMC driver fails with error: \"Creating vfat image failed: Unexpected error while running command\"\nOCPBUGS-5140 - [alibabacloud] IPI install got bootstrap failure and without any node ready, due to enforced EIP bandwidth 5 Mbit/s\nOCPBUGS-5151 - Installer - provisioning interface on master node not getting ipv4 dhcp ip address from bootstrap dhcp server on OCP IPI BareMetal install\nOCPBUGS-5164 - Add support for API version v1beta1 for knativeServing and knativeEventing\nOCPBUGS-5165 - Dev Sandbox clusters uses clusterType OSD and there is no way to enforce DEVSANDBOX\nOCPBUGS-5182 - [azure] Fail to create master node with vm size in family ECIADSv5 and ECIASv5\nOCPBUGS-5184 - [azure] Fail to create master node with vm size in standardNVSv4Family\nOCPBUGS-5188 - Wrong message in MCCDrainError alert\nOCPBUGS-5234 - [azure] Azure Stack Hub (wwt) UPI installation failed to scale up worker nodes using machinesets \nOCPBUGS-5235 - mapi_instance_create_failed metric cannot work when set acceleratedNetworking: true on Azure\nOCPBUGS-5269 - remove unnecessary RBAC in KCM: file removal\nOCPBUGS-5275 - remove unnecessary RBAC in OCM\nOCPBUGS-5287 - Bug with Red Hat Integration - 3scale - Managed Application Services causes operator-install-single-namespace.spec.ts to fail\nOCPBUGS-5292 - Multus: Interface name contains an invalid character / [ocp 4.13]\nOCPBUGS-5300 - WriteRequestBodies audit profile records routes/status events at RequestResponse level\nOCPBUGS-5306 - One old machine stuck in Deleting and many co get degraded when doing master replacement on the cluster with OVN network\nOCPBUGS-5346 - Reported vSphere Connection status is misleading\nOCPBUGS-5347 - Clusteroperator Available condition is updated every 2 mins when operator is disabled\nOCPBUGS-5353 - Dashboard graph should not be stacked - Kubernetes / Compute Resources / Pod Dashboard\nOCPBUGS-5410 - [AWS-EBS-CSI-Driver] provision volume using customer kms key couldn\u0027t restore its snapshot successfully\nOCPBUGS-5423 - openshift-marketplace pods cause PodSecurityViolation alert to fire\nOCPBUGS-5428 - Many plugin SDK extension docs are missing descriptions\nOCPBUGS-5432 - Downstream Operator-SDK v1.25.1 to OCP 4.13\nOCPBUGS-5458 - wal: max entry size limit exceeded\nOCPBUGS-5465 - Context Deadline exceeded when PTP service is disabled from the switch\nOCPBUGS-5466 - Default 
CatalogSource aren\u0027t always reverted to default settings\nOCPBUGS-5492 - CI \"[Feature:bond] should create a pod with bond interface\" fail for MTU migration jobs\nOCPBUGS-5497 - MCDRebootError alarm disappears after 15 minutes\nOCPBUGS-5498 - Host inventory quick start for OCP\nOCPBUGS-5505 - Upgradeability check is throttled too much and with unnecessary non-determinism\nOCPBUGS-5508 - Report topology usage in vSphere environment via telemetry\nOCPBUGS-5517 - [Azure/ARO] Update Azure SDK to v63.1.0+incompatible \nOCPBUGS-5520 - MCDPivotError alert fires due temporary transient failures \nOCPBUGS-5523 - Catalog, fatal error: concurrent map read and map write\nOCPBUGS-5524 - Disable vsphere intree tests that exercise multiple tests\nOCPBUGS-5534 -  [UI] When OCP and ODF are upgraded, refresh web console pop-up doesn\u0027t appear after ODF upgrade resulting in dashboard crash\nOCPBUGS-5540 - Typo in WTO for Milliseconds\nOCPBUGS-5542 - Project dropdown order is not as smart as project list page order\nOCPBUGS-5546 - Machine API Provider Azure should not modify the Machine spec\nOCPBUGS-5547 - Webhook Secret (1 of 2) is not removed when Knative Service is deleted\nOCPBUGS-5559 - add default noProxy config for Azure\nOCPBUGS-5733 - [Openshift Pipelines] Description of parameters are not shown in pipelinerun description page\nOCPBUGS-5734 - Azure: VIP 168.63.129.16 should be noProxy to all clouds except Public\nOCPBUGS-5736 - The main section of the page will keep loading after normal user login\nOCPBUGS-5759 - Deletion of BYOH Windows node hangs in Ready,SchedulingDisabled\nOCPBUGS-5802 - update sprig to v3 in cno\nOCPBUGS-5836 - Incorrect redirection when user try to download windows oc binary\nOCPBUGS-5842 - executes /host/usr/bin/oc\nOCPBUGS-5851 - [CI-Watcher]: Using OLM descriptor components deletes operand \nOCPBUGS-5873 - etcd_object_counts is deprecated and replaced with apiserver_storage_objects, causing \"etcd Object Count\" dashboard to only show OpenShift resources\nOCPBUGS-5888 - Failed to install 4.13 ocp on SNO with \"error during syncRequiredMachineConfigPools\"\nOCPBUGS-5891 - oc-mirror heads-only does not work with target name\nOCPBUGS-5903 - gather default ingress controller definition\nOCPBUGS-5922 - [2047299 Jira placeholder] nodeport not reachable port connection timeout\nOCPBUGS-5939 - revert \"force cert rotation every couple days for development\" in 4.13\nOCPBUGS-5948 - Runtime error using API Explorer with AdmissionReview resource\nOCPBUGS-5949 - oc --icsp mapping scope does not match openshift icsp mapping scope\nOCPBUGS-5959 - [4.13] Bootimage bump tracker\nOCPBUGS-5988 - Degraded etcd on assisted-installer installation- bootstrap etcd is not removed properly\nOCPBUGS-5991 - Kube APIServer panics in admission controller\nOCPBUGS-5997 - Add Git Repository form shows empty permission content and non-working help link until a git url is entered\nOCPBUGS-6004 - apiserver pods cannot reach etcd on single node IPv6 cluster: transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10\"\nOCPBUGS-6011 - openshift-client package has wrong version of kubectl bundled\nOCPBUGS-6018 - The MCO can generate a rendered config with old KubeletConfig contents, blocking upgrades\nOCPBUGS-6026 - cannot change /etc folder ownership inside pod\nOCPBUGS-6033 - metallb 4.12.0-202301042354 (OCP 4.12)  refers to external image\nOCPBUGS-6049 - Do not show UpdateInProgress when status is Failing\nOCPBUGS-6053 - 
`availableUpdates: null` results in run-time error on Cluster Settings page\nOCPBUGS-6055 - thanos-ruler-user-workload-1 pod is getting repeatedly re-created after upgrade do 4.10.41\nOCPBUGS-6063 - PVs(vmdk) get deleted when scaling down machineSet with vSphere IPI\nOCPBUGS-6089 - Unnecessary event reprocessing\nOCPBUGS-6092 - ovs-configuration.service fails - Error: Connection activation failed: No suitable device found for this connection\nOCPBUGS-6097 - CVO hotloops on ImageStream and logs the information incorrectly\nOCPBUGS-6098 - Show Git icon and URL in repository link in PLR details page should be based on the git provider\nOCPBUGS-6101 - Daemonset is not upgraded after operator upgrade\nOCPBUGS-6175 - Image registry Operator does not use Proxy when connecting to openstack\nOCPBUGS-6185 - Update 4.13 ose-cluster-config-operator image to be consistent with ART\nOCPBUGS-6187 - Update 4.13 openshift-state-metrics image to be consistent with ART\nOCPBUGS-6189 - Update 4.13 ose-cluster-authentication-operator image to be consistent with ART\nOCPBUGS-6191 - Update 4.13 ose-network-metrics-daemon image to be consistent with ART\nOCPBUGS-6197 - Update 4.13 ose-openshift-apiserver image to be consistent with ART\nOCPBUGS-6201 - Update 4.13 openshift-enterprise-pod image to be consistent with ART\nOCPBUGS-6202 - Update 4.13 ose-cluster-kube-apiserver-operator image to be consistent with ART\nOCPBUGS-6213 - Update 4.13 ose-machine-config-operator image to be consistent with ART\nOCPBUGS-6222 - Update 4.13 ose-alibaba-cloud-csi-driver image to be consistent with ART\nOCPBUGS-6228 - Update 4.13 coredns image to be consistent with ART\nOCPBUGS-6231 - Update 4.13 ose-kube-storage-version-migrator image to be consistent with ART\nOCPBUGS-6232 - Update 4.13 marketplace-operator image to be consistent with ART\nOCPBUGS-6233 - Update 4.13 ose-cluster-openshift-apiserver-operator image to be consistent with ART\nOCPBUGS-6234 - Update 4.13 ose-cluster-bootstrap image to be consistent with ART\nOCPBUGS-6235 - Update 4.13 cluster-network-operator image to be consistent with ART\nOCPBUGS-6238 - Update 4.13 oauth-server image to be consistent with ART\nOCPBUGS-6240 - Update 4.13 ose-cluster-kube-storage-version-migrator-operator image to be consistent with ART\nOCPBUGS-6241 - Update 4.13 operator-lifecycle-manager image to be consistent with ART\nOCPBUGS-6247 - Update 4.13 ose-cluster-ingress-operator image to be consistent with ART\nOCPBUGS-6262 - Add more logs to \"oc extract\" in mco-first boot service \nOCPBUGS-6265 - When installing SNO with bootstrap in place it takes CVO 6 minutes to acquire the leader lease \nOCPBUGS-6270 - Irrelevant vsphere platform data is required\nOCPBUGS-6272 - E2E tests: Entire pipeline flow from Builder page Start the pipeline with workspace\nOCPBUGS-631 - machineconfig service is failed to start because Podman storage gets corrupted\nOCPBUGS-6486 - Image upload fails when installing cluster\nOCPBUGS-6503 - admin ack test nondeterministically does a check post-upgrade\nOCPBUGS-6504 - IPI Baremetal Master Node in DualStack getting fd69:: address randomly,  OVN CrashLoopBackOff\nOCPBUGS-6507 - Don\u0027t retry network policy peer pods if ips couldn\u0027t be fetched\nOCPBUGS-6577 - Node-exporter NodeFilesystemAlmostOutOfSpace alert exception needed\nOCPBUGS-6610 - Developer - Topology : \u0027Filter by resource\u0027 drop-down i18n misses\nOCPBUGS-6621 - Image registry panics while deploying OCP in ap-southeast-4 AWS region\nOCPBUGS-6624 - Issue deploying the master node 
with IPI\nOCPBUGS-6634 - Let the console able to build on other architectures and compatible with prow builds\nOCPBUGS-6646 - Ingress node firewall CI is broken with latest\nOCPBUGS-6647 - User Preferences - Applications : Resource type drop-down i18n misses\nOCPBUGS-6651 - Nodes unready in PublicAndPrivate / Private Hypershift setups behind a proxy\nOCPBUGS-6660 - Uninstall Operator? modal instructions always reference optional checkbox\nOCPBUGS-6663 - Platform baremetal warnings during create image when fields not defined\nOCPBUGS-6682 - [OVN] ovs-configuration vSphere vmxnet3 allmulti workaround is now permanent\nOCPBUGS-6698 - Fix conflict error message in cluster-ingress-operator\u0027s ensureNodePortService\nOCPBUGS-6700 - Cluster-ingress-operator\u0027s updateIngressClass function logs success message when error\nOCPBUGS-6701 - The ingress-operator spuriously updates ingressClass on startup\nOCPBUGS-6714 - Traffic from egress IPs was interrupted after Cluster patch to Openshift 4.10.46\nOCPBUGS-672 - Redhat-operators are failing regularly due to startup probe timing out which in turn increases CPU/Mem usage on Master nodes\nOCPBUGS-6722 - s390x: failed to generate asset \"Image\": multiple \"disk\" artifacts found\nOCPBUGS-6730 - Pod latency spikes are observed when there is a compaction/leadership transfer\nOCPBUGS-6731 - Gathered Environment variables (HTTP_PROXY/HTTPS_PROXY) may contain sensible information and should be obfuscated\nOCPBUGS-6741 - opm fails to serve FBC if cachedir not provided\nOCPBUGS-6757 - Pipeline Repository (Pipeline-as-Code) list page shows an empty Event type column\nOCPBUGS-6760 - Couldn\u0027t update/delete cpms on gcp private cluster\nOCPBUGS-6762 - Enhance the user experience for the name-filter-input on Metrics target page\nOCPBUGS-6765 - \"Delete dependent objects of this resource\" might cause confusions\nOCPBUGS-6777 - [gcp][CORS-1988] \"create manifests\" without an existing \"install-config.yaml\" missing 4 YAML files in \"\u003cinstall dir\u003e/openshift\" which leads to \"create cluster\" failure\nOCPBUGS-6781 - gather Machine objects\nOCPBUGS-6797 - Empty IBMCOS storage config causes operator to crashloop\nOCPBUGS-6799 - Repositories list does not show the running pipelinerun as last pipelinerun\nOCPBUGS-6809 - Uploading large layers fails with \"blob upload invalid\"\nOCPBUGS-6811 - Update Cluster Sample Operator dependencies and libraries for OCP 4.13\nOCPBUGS-6821 - Update NTO images to be consistent with ART\nOCPBUGS-6832 - Include openshift_apps_deploymentconfigs_strategy_total to recent_metrics\nOCPBUGS-6893 - Dev console doesn\u0027t finish loading for users with limited access\nOCPBUGS-6902 - 4.13-e2e-metal-ipi-upgrade-ovn-ipv6 on permafail\nOCPBUGS-6917 - MultinetworkPolicy: unknown service runtime.v1alpha2.RuntimeService\nOCPBUGS-6925 - Update OWNERS_ALIASES in release-4.13 branch\nOCPBUGS-6945 - OS Release reports incorrect version ID\nOCPBUGS-6953 - ovnkube-master panic nil deref\nOCPBUGS-6955 -  panic in an ovnkube-master  pod\nOCPBUGS-6962 - \u0027agent_installer\u0027 invoker not showing up in telemetry\nOCPBUGS-6977 - pod-identity-webhook replicas=2 is failing single node jobs\nOCPBUGS-6978 - Index violation on IGMP_Group during upgrade from 4.12.0 to 4.12.1\nOCPBUGS-6994 - All Clusters perspective is not activated automatically when ACM is installed\nOCPBUGS-702 - The caBundle field of alertmanagerconfigs.monitoring.coreos.com crd is getting removed\nOCPBUGS-7031 - Pipelines repository list and creation form doesn\u0027t 
show Tech Preview status\nOCPBUGS-7090 - Add to navigation button in search result does nothing\nOCPBUGS-7102 - OLM downstream utest fails due to new release-XX+1 branch creation\nOCPBUGS-7106 - network-tools needs to be updated to give ovn-k master leader info\nOCPBUGS-7118 - OCP 4.12 does not support launching SGX enclaves\nOCPBUGS-7144 - On mobile screens, At pipeline details page the info alert on metrics tab is not showing correctly\nOCPBUGS-7149 - IPv6 multinode spoke no moving from rebooting/configuring stage\nOCPBUGS-7173 - [OVN] DHCP timeouts on Azure arm64, install fails\nOCPBUGS-7180 - [4.13] Bootimage bump tracker\nOCPBUGS-7186 - [gcp][CORS-2424] with \"secureBoot\" enabled, after deleting control-plane machine, the new machine is created with \"enableSecureBoot\" being False unexpectedly\nOCPBUGS-7195 - [CI-Watcher] e2e issue with tests: Create Samples Page Timeout Error\nOCPBUGS-7199 - [CI-Watcher] e2e issue with tests: Interacting with CatalogSource page\nOCPBUGS-7204 - Manifests generated to multiple \"results-xxx\" folders when using the oci feature with OCI and nonOCI catalogs \nOCPBUGS-7207 - MTU migration configuration is cleaned up prematurely while in progress\nOCPBUGS-723 - ClusterResourceQuota values are not reflecting. \nOCPBUGS-7268 - [4.13] Modify the PSa pod extractor to mutate pod controller pod specs\nOCPBUGS-7284 - Hypershift failing new SCC conformance tests\nOCPBUGS-7291 - ptp keeps trying to start phc2sys even if it\u0027s configured as empty string in phc2sysOpts\nOCPBUGS-7293 - RHCOS 9.2 Failing to Bootstrap on Metal, OpenStack, vSphere (all baremetal runtime platforms)\nOCPBUGS-7300 - aws-ebs-csi-driver-operator crash loops with HC proxy configured\nOCPBUGS-7301 - Not possible to use certain start addresses in whereabouts IPv6 range [Backport 4.13]\nOCPBUGS-7308 - Download kubeconfig for ServiceAccount returns error\nOCPBUGS-7354 - Installation failed on Azure SDN as network is degraded \nOCPBUGS-7356 - Default channel on OCP 4.13 should be stable-4.13\nOCPBUGS-7359 - [Azure] Replace master failed as new master did not add into lb backend \nOCPBUGS-736 - Kuryr uses default MTU for service network\nOCPBUGS-7366 - [gcp] New machine stuck in Provisioning when delete one zone from cpms on gcp with customer vpc\nOCPBUGS-7372 - fail early on missing node status envs\nOCPBUGS-7374 - set default timeouts in etcdcli\nOCPBUGS-7391 - Monitoring operator long delay reconciling extension-apiserver-authentication\nOCPBUGS-7399 - In the Edit application mode, the name of the added pipeline is not displayed anymore\nOCPBUGS-7408 - AzureDisk CSI driver does not compile with cachito\nOCPBUGS-7412 - gomod dependencies failures in 4.13-4.14 container builds\nOCPBUGS-7417 - gomod dependencies failures in 4.13-4.14 container builds\nOCPBUGS-7418 - Default values for Scaling fields is not set in Create Serverless function form\nOCPBUGS-7419 - CVO delay when setting clusterversion available status to true  \nOCPBUGS-7421 - Missing i18n key for PAC section in Git import form\nOCPBUGS-7424 - Bump cluster-ingress-operator to k8s APIs v0.26.1\nOCPBUGS-7427 - dynamic-demo-plugin.spec.ts requires 10 minutes of unnecessary wait time\nOCPBUGS-7438 - Egress service does not handle invalid nodeSelectors correctly\nOCPBUGS-7482 - Fix handling of single failure-domain (non-tagged) deployments in vsphere\nOCPBUGS-7483 - Hypershift installs on \"platform: none\" are broken\nOCPBUGS-7488 - test flake: should not reconcile SC when state is Unmanaged\nOCPBUGS-7495 - Platform type is 
ignored\nOCPBUGS-7517 - Helm page crashes on old releases with a new Secret\nOCPBUGS-7519 - NFS Storage Tests trigger Kernel Panic on Azure and Metal\nOCPBUGS-7523 - Add new AWS regions for ROSA\nOCPBUGS-7542 - Bump router to k8s APIs v0.26.1\nOCPBUGS-7555 - Enable default sysctls for kubelet\nOCPBUGS-7558 - Rebase coredns to 1.10.1\nOCPBUGS-7563 - vSphere install can\u0027t complete with out-of-tree CCM\nOCPBUGS-7579 - [azure] failed to parse client certificate when using certificate-based Service Principal with passpharse\nOCPBUGS-7611 - PTPOperator config transportHost with AMQ is not detected \nOCPBUGS-7616 - vSphere multiple in-tree test failures (non-zonal)\nOCPBUGS-7617 - Azure Disk volume is taking time to attach/detach\nOCPBUGS-7622 - vSphere UPI jobs failing with \u0027Managed cluster should have machine resources\u0027\nOCPBUGS-7648 - Bump cluster-dns-operator to k8s APIs v0.26.1\nOCPBUGS-7689 - Project Admin is able to Label project with empty string in RHOCP 4\nOCPBUGS-7696 - [ Azure ]not able to deploy machine with publicIp:true\nOCPBUGS-7707 - /etc/NetworkManager/dispatcher.d needs to be relabeled during pivot from 8.6 to 9.2\nOCPBUGS-7719 - Update to 4.13.0-ec.3 stuck on leaked MachineConfig\nOCPBUGS-7729 - Remove ETCD liviness probe. \nOCPBUGS-7731 - Need to cancel threads when agent-tui timeout is stopped\nOCPBUGS-7733 - Afterburn fails on AWS/GCP clusters born in OCP 4.1/4.2\nOCPBUGS-7743 - SNO upgrade from 4.12 to 4.13 rhel9.2 is broken cause of dnsmasq default config\nOCPBUGS-7750 - fix gofmt check issue in network-metrics-daemon\nOCPBUGS-7754 - ART having trouble building olm images\nOCPBUGS-7774 - RawCNIConfig is printed in byte representation on failure, not human readable\nOCPBUGS-7785 - migrate to using Lease for leader election\nOCPBUGS-7806 - add \"nfs-export\" under PV details page\nOCPBUGS-7809 - sg3_utils package is missing in the assisted-installer-agent Docker file\nOCPBUGS-781 - ironic-proxy is using a deprecated field to fetch cluster VIP\nOCPBUGS-7833 - Storage tests failing in no-capabilities job\nOCPBUGS-7837 - hypershift: aws-ebs-csi-driver-operator uses guest cluster proxy causing PV provisioning failure\nOCPBUGS-7860 - [azure] message is unclear when missing clientCertificatePassword in osServicePrincipal.json\nOCPBUGS-7876 - [Descheduler] Enabling LifeCycleUtilization to test namespace filtering does not work\nOCPBUGS-7879 - Devfile isn\u0027t be processed correctly on \u0027Add from git repo\u0027\nOCPBUGS-7896 - MCO should not add keepalived pod manifests in case of VSPHERE UPI\nOCPBUGS-7899 - ODF Monitor pods failing to be bounded because timeout issue with thin-csi SC\nOCPBUGS-7903 -  Pool degraded with error: rpm-ostree kargs: signal: terminated\nOCPBUGS-7909 - Baremetal runtime prepender creates /etc/resolv.conf mode 0600 and bad selinux context\nOCPBUGS-794 - OLM version rule is not clear\nOCPBUGS-7940 - apiserver panics in admission controller\nOCPBUGS-7943 - AzureFile CSI driver does not compile with cachito\nOCPBUGS-7970 - [E2E] Always close the filter dropdown in listPage.filter.by\nOCPBUGS-799 - Reply packet for DNS conversation to service IP uses pod IP as source\nOCPBUGS-8066 - Create Serverless Function form breaks if Pipeline Operator is not installed\nOCPBUGS-8086 - Visual issues with listing items\nOCPBUGS-8243 - [release 4.13] Gather Monitoring pods\u0027 Persistent Volumes\nOCPBUGS-8308 - Bump openshift/kubernetes to 1.26.2\nOCPBUGS-8312 - IPI on Power VS clusters cannot deploy MCO\nOCPBUGS-8326 - Azure cloud provider should use 
Kubernetes 1.26 dependencies\nOCPBUGS-8341 - Unable to set capabilities with agent installer based installation \nOCPBUGS-8342 - create cluster-manifests fails when imageContentSources is missing\nOCPBUGS-8353 - PXE support is incomplete\nOCPBUGS-8381 - Console shows x509 error when requesting token from oauth endpoint\nOCPBUGS-8401 - Bump openshift/origin to kube 1.26.2\nOCPBUGS-8424 - ControlPlaneMachineSet: Machine\u0027s Node should be Ready to consider the Machine Ready\nOCPBUGS-8445 - cgroups default setting in OCP 4.13 generates extra MachineConfig\nOCPBUGS-8463 - OpenStack Failure domains as 4.13 TechPreview\nOCPBUGS-8471 - [4.13] egress firewall only createas 1 acl for long namespace names\nOCPBUGS-8475 - TestBoundTokenSignerController causes unrecoverable disruption in e2e-gcp-operator CI job\nOCPBUGS-8481 - CAPI rebases 4.13 backports\nOCPBUGS-8490 - agent-tui: display additional checks only when primary check fails\nOCPBUGS-8498 - aws-ebs-csi-driver-operator ServiceAccount does not include the HCP pull-secret in its imagePullSecrets\nOCPBUGS-8505 - [4.13] egress firewall acls are deleted on restart\nOCPBUGS-8511 - [4.13+ ONLY] Don\u0027t use port 80 in bootstrap IPI bare metal\nOCPBUGS-855 - When setting allowedRegistries urls the openshift-samples operator is degraded\nOCPBUGS-859 - monitor not working with UDP lb when externalTrafficPolicy: Local\nOCPBUGS-860 - CSR are generated with incorrect Subject Alternate Names\nOCPBUGS-8699 - Metal IPI Install Rate Below 90%\nOCPBUGS-8701 - `oc patch project`  not working with OCP 4.12\nOCPBUGS-8702 - OKD SCOS: remove workaround for rpm-ostree auth\nOCPBUGS-8703 - fails to switch to kernel-rt with rhel 9.2\nOCPBUGS-8710 - [4.13] don\u0027t enforce PSa in 4.13\nOCPBUGS-8712 - AES-GCM encryption at rest is not supported by kube-apiserver-operator\nOCPBUGS-8719 - Allow the user to scroll the content of the agent-tui details view\nOCPBUGS-8741 - [4.13] Pods in same deployment will have different ability to query services in same namespace from one another; ocp 4.10\nOCPBUGS-8742 - Origin tests should not specify `readyz` as the health check path\nOCPBUGS-881 - fail to create install-config.yaml as apiVIP and ingressVIP are not in machine networks\nOCPBUGS-8941 - Introduce tooltips for contextual information\nOCPBUGS-904 - Alerts from MCO are missing namespace\nOCPBUGS-9079 - ICMP fragmentation needed sent to pods behind a service don\u0027t seem to reach the pods\nOCPBUGS-91 - [ExtDNS] New TXT record breaks downward compatibility by retroactively limiting record length\nOCPBUGS-9132 - WebSCale: ovn logical router polices incorrect/l3 gw config not updated after IP change\nOCPBUGS-9185 - Pod latency spikes are observed when there is a compaction/leadership transfer\nOCPBUGS-9233 - ConsoleQuickStart {{copy}} and {{execute}} features do not work in some cases\nOCPBUGS-931 - [osp][octavia lb] NodePort allocation cannot be disabled for LB type svcs\nOCPBUGS-9338 - editor toggle radio input doesn\u0027t have distinguishable attributes\nOCPBUGS-9389 - Detach code in vsphere csi driver is failing\nOCPBUGS-948 - OLM sets invalid SCC label on its namespaces\nOCPBUGS-95 - NMstate removes egressip in OpenShift cluster with SDN plugin\nOCPBUGS-9913 - bacport tests for PDBUnhealthyPodEvictionPolicy as Tech Preview\nOCPBUGS-9924 - Remove unsupported warning in oc-mirror when using the --skip-pruning flag\nOCPBUGS-9926 - Enable node healthz server for ovnk in CNO \nOCPBUGS-9951 - fails to reconcile to RT kernel on interrupted updates\nOCPBUGS-9957 - 
Garbage collect grafana-dashboard-etcd\nOCPBUGS-996 - Control Plane Machine Set Operator OnDelete update should cause an error when more than one machine is ready in an index\nOCPBUGS-9963 - Better to change the error information more clearly to help understand \nOCPBUGS-9968 - Operands running management side missing affinity, tolerations, node selector and priority rules than the operator\n\n6. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key\n2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key\n2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key\n2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-1927"
      },
      {
        "db": "VULHUB",
        "id": "VHN-423615"
      },
      {
        "db": "PACKETSTORM",
        "id": "168022"
      },
      {
        "db": "PACKETSTORM",
        "id": "168538"
      },
      {
        "db": "PACKETSTORM",
        "id": "168112"
      },
      {
        "db": "PACKETSTORM",
        "id": "168213"
      },
      {
        "db": "PACKETSTORM",
        "id": "168139"
      },
      {
        "db": "PACKETSTORM",
        "id": "172441"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-1927",
        "trust": 2.4
      },
      {
        "db": "PACKETSTORM",
        "id": "168538",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "168112",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "168022",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "167944",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168378",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168182",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168222",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168284",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168013",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169443",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-4253",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4122",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6290",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.2896",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.4082",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5247",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5300",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4233",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3813",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4316",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4167",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4747",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3921",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4568",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3002",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0019",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6434",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022062022",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "168139",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "168213",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "168516",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168150",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168289",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168287",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169435",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-423615",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "172441",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168352",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-423615"
      },
      {
        "db": "PACKETSTORM",
        "id": "168022"
      },
      {
        "db": "PACKETSTORM",
        "id": "168538"
      },
      {
        "db": "PACKETSTORM",
        "id": "168112"
      },
      {
        "db": "PACKETSTORM",
        "id": "168213"
      },
      {
        "db": "PACKETSTORM",
        "id": "168139"
      },
      {
        "db": "PACKETSTORM",
        "id": "172441"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1927"
      }
    ]
  },
  "id": "VAR-202205-1990",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-423615"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2024-07-23T20:56:27.098000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Vim Buffer error vulnerability fix",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=212449"
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-126",
        "trust": 1.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-423615"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1927"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht213488"
      },
      {
        "trust": 1.7,
        "url": "https://huntr.dev/bounties/945107ef-0b27-41c7-a03c-db99def0e777"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2022/oct/28"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2022/oct/41"
      },
      {
        "trust": 1.7,
        "url": "https://security.gentoo.org/glsa/202208-32"
      },
      {
        "trust": 1.7,
        "url": "https://github.com/vim/vim/commit/4d97a565ae8be0d4debba04ebd2ac3e75a0c8010"
      },
      {
        "trust": 1.6,
        "url": "https://security.gentoo.org/glsa/202305-16"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ozslfikfyu5y2km5ejkqnyhwrubdq4gj/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qmfhbc5oqxdpv2sdya2juqgvcpyastjb/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/tynk6sdcmolqjoi3b4aoe66p2g2ih4zm/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qmfhbc5oqxdpv2sdya2juqgvcpyastjb/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/tynk6sdcmolqjoi3b4aoe66p2g2ih4zm/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ozslfikfyu5y2km5ejkqnyhwrubdq4gj/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-1785"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-1897"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-1927"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4747"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3813"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168538/red-hat-security-advisory-2022-6696-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168222/red-hat-security-advisory-2022-6283-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168182/red-hat-security-advisory-2022-6184-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.2896"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6290"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168013/red-hat-security-advisory-2022-5942-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4233"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5300"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6434"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3002"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168378/red-hat-security-advisory-2022-6507-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5247"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4316"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-1927/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/vim-out-of-bounds-memory-reading-via-parse-cmd-address-38495"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3921"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.4082"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168112/red-hat-security-advisory-2022-6051-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168284/red-hat-security-advisory-2022-6183-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/167944/red-hat-security-advisory-2022-5813-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0019"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022062022"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4167"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168022/red-hat-security-advisory-2022-6024-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4122"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169443/red-hat-security-advisory-2022-7058-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-32250"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-1012"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-29154"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-27782"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-27776"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-22576"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-27774"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-30629"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-32206"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-32208"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-2526"
      },
      {
        "trust": 0.3,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1729"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21123"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21166"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21125"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-34903"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-31129"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-38561"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21698"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/1548993"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0670"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25314"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/2789521"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25313"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6024"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28915"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#critical"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-31150"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28915"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21123"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-36067"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2015-20107"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0391"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-31151"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0759"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0759"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6051"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1966"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3177"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26137"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1966"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26137"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6103"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6102"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20329"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-38023"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26280"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0620"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0665"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25173"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46146"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-41721"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25725"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-38177"
      },
      {
        "trust": 0.1,
        "url": "https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27191"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-38178"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4238"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-28642"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-41717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3259"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23526"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0286"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-41316"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-30570"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:1325"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43519"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2990"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43519"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23525"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2509"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-30841"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20329"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-41723"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-40674"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42919"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.13/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0229"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-27561"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23525"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44964"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25000"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4238"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42011"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:1326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25165"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0217"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0401"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44964"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42010"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0216"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-41725"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-41724"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-4450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-4304"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-47629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-4203"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25809"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3080"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28500"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0686"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23337"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6429"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0512"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1650"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-423615"
      },
      {
        "db": "PACKETSTORM",
        "id": "168022"
      },
      {
        "db": "PACKETSTORM",
        "id": "168538"
      },
      {
        "db": "PACKETSTORM",
        "id": "168112"
      },
      {
        "db": "PACKETSTORM",
        "id": "168213"
      },
      {
        "db": "PACKETSTORM",
        "id": "168139"
      },
      {
        "db": "PACKETSTORM",
        "id": "172441"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1927"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-423615"
      },
      {
        "db": "PACKETSTORM",
        "id": "168022"
      },
      {
        "db": "PACKETSTORM",
        "id": "168538"
      },
      {
        "db": "PACKETSTORM",
        "id": "168112"
      },
      {
        "db": "PACKETSTORM",
        "id": "168213"
      },
      {
        "db": "PACKETSTORM",
        "id": "168139"
      },
      {
        "db": "PACKETSTORM",
        "id": "172441"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1927"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-05-29T00:00:00",
        "db": "VULHUB",
        "id": "VHN-423615"
      },
      {
        "date": "2022-08-10T15:50:41",
        "db": "PACKETSTORM",
        "id": "168022"
      },
      {
        "date": "2022-09-27T16:01:00",
        "db": "PACKETSTORM",
        "id": "168538"
      },
      {
        "date": "2022-08-19T15:03:34",
        "db": "PACKETSTORM",
        "id": "168112"
      },
      {
        "date": "2022-09-01T16:30:25",
        "db": "PACKETSTORM",
        "id": "168213"
      },
      {
        "date": "2022-08-24T13:06:10",
        "db": "PACKETSTORM",
        "id": "168139"
      },
      {
        "date": "2023-05-18T13:46:17",
        "db": "PACKETSTORM",
        "id": "172441"
      },
      {
        "date": "2022-09-13T15:42:14",
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "date": "2022-05-29T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      },
      {
        "date": "2022-05-29T14:15:08.047000",
        "db": "NVD",
        "id": "CVE-2022-1927"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-10-31T00:00:00",
        "db": "VULHUB",
        "id": "VHN-423615"
      },
      {
        "date": "2023-07-20T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      },
      {
        "date": "2023-11-07T03:42:18.747000",
        "db": "NVD",
        "id": "CVE-2022-1927"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Vim Buffer error vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-4253"
      }
    ],
    "trust": 0.6
  }
}
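
For readers who want to consume a VARIoT record like the one above programmatically, the following is a minimal sketch of one possible approach: load the JSON record and summarize its title, type, threat type, and most recent source update date. The field names ("title", "type", "threat_type", "sources_update_date", "data", "date") are taken directly from the record shown above; the file name "variot_record.json" and the helper latest_update are illustrative assumptions, not part of any official VARIoT tooling.

# Minimal sketch: parse a VARIoT JSON record and print a short summary.
# Assumes the record above has been saved to "variot_record.json" (hypothetical file name).
import json
from datetime import datetime

def latest_update(record):
    """Return the most recent datetime found in sources_update_date.data, or None."""
    entries = record.get("sources_update_date", {}).get("data", [])
    dates = []
    for entry in entries:
        raw = entry.get("date", "")
        # Dates in the record appear both with and without fractional seconds.
        for fmt in ("%Y-%m-%dT%H:%M:%S.%f", "%Y-%m-%dT%H:%M:%S"):
            try:
                dates.append(datetime.strptime(raw, fmt))
                break
            except ValueError:
                continue
    return max(dates) if dates else None

with open("variot_record.json") as fh:
    record = json.load(fh)

print("Title:      ", record.get("title", {}).get("data"))
print("Type:       ", record.get("type", {}).get("data"))
print("Threat type:", record.get("threat_type", {}).get("data"))
print("Last update:", latest_update(record))

For the record above, this would report the CNNVD title "Vim Buffer error vulnerability", type "buffer error", threat type "local", and the NVD update timestamp as the latest update.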

