var-202205-0855
Vulnerability from variot

Heap buffer overflow in vim_strncpy (find_word) in the GitHub repository vim/vim prior to 8.2.4919. This vulnerability can crash the software, bypass protection mechanisms, modify memory, and possibly allow remote code execution. vim/vim contains an out-of-bounds write vulnerability; exploitation may lead to information disclosure, information tampering, and denial of service (DoS). Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64

Security Fix(es):

  • vim: Use of Out-of-range Pointer Offset in vim (CVE-2022-0554)

  • vim: Heap-based Buffer Overflow occurs in vim (CVE-2022-0943)

  • vim: Out-of-range Pointer Offset (CVE-2022-1420)

  • vim: heap buffer overflow (CVE-2022-1621)

  • vim: buffer over-read (CVE-2022-1629)

  • vim: use after free in utf_ptr2char (CVE-2022-1154)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
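Since the vim flaw described above affects versions prior to 8.2.4919, one quick sanity check after updating is a version comparison. A minimal sketch, assuming GNU `sort -V` is available; the `is_patched` helper name is illustrative and not part of the advisory:

```shell
# Hypothetical helper: succeeds (exit 0) when the supplied vim version
# string is at or beyond the fixed release 8.2.4919.
is_patched() {
  fixed="8.2.4919"
  # sort -V orders version strings numerically; if the fixed version
  # sorts first (or equal), the supplied version is new enough.
  lowest=$(printf '%s\n%s\n' "$fixed" "$1" | sort -V | head -n1)
  [ "$lowest" = "$fixed" ]
}

is_patched "8.2.5000" && echo "patched"
is_patched "8.2.4918" || echo "vulnerable"
```

In practice the installed version would come from something like `vim --version` or an `rpm -q` query, whose exact output format varies by build.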

5. Bugs fixed (https://bugzilla.redhat.com/):

2058483 - CVE-2022-0554 vim: Use of Out-of-range Pointer Offset in vim
2064064 - CVE-2022-0943 vim: Heap-based Buffer Overflow occurs in vim
2073013 - CVE-2022-1154 vim: use after free in utf_ptr2char
2077734 - CVE-2022-1420 vim: Out-of-range Pointer Offset
2083924 - CVE-2022-1621 vim: heap buffer overflow
2083931 - CVE-2022-1629 vim: buffer over-read

6. Package List:

Red Hat Enterprise Linux AppStream (v.

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis: Important: OpenShift Container Platform 4.11.0 bug fix and security update
Advisory ID: RHSA-2022:5069-01
Product: Red Hat OpenShift Enterprise
Advisory URL: https://access.redhat.com/errata/RHSA-2022:5069
Issue date: 2022-08-10
CVE Names: CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013
           CVE-2018-25014 CVE-2018-25032 CVE-2019-5827 CVE-2019-13750
           CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218
           CVE-2019-19603 CVE-2019-20838 CVE-2020-13435 CVE-2020-14155
           CVE-2020-17541 CVE-2020-19131 CVE-2020-24370 CVE-2020-28493
           CVE-2020-35492 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332
           CVE-2021-3481 CVE-2021-3580 CVE-2021-3634 CVE-2021-3672
           CVE-2021-3695 CVE-2021-3696 CVE-2021-3697 CVE-2021-3737
           CVE-2021-4115 CVE-2021-4156 CVE-2021-4189 CVE-2021-20095
           CVE-2021-20231 CVE-2021-20232 CVE-2021-23177 CVE-2021-23566
           CVE-2021-23648 CVE-2021-25219 CVE-2021-31535 CVE-2021-31566
           CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087
           CVE-2021-38185 CVE-2021-38593 CVE-2021-40528 CVE-2021-41190
           CVE-2021-41617 CVE-2021-42771 CVE-2021-43527 CVE-2021-43818
           CVE-2021-44225 CVE-2021-44906 CVE-2022-0235 CVE-2022-0778
           CVE-2022-1012 CVE-2022-1215 CVE-2022-1271 CVE-2022-1292
           CVE-2022-1586 CVE-2022-1621 CVE-2022-1629 CVE-2022-1706
           CVE-2022-1729 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698
           CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806
           CVE-2022-24407 CVE-2022-24675 CVE-2022-24903 CVE-2022-24921
           CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-26945
           CVE-2022-27191 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782
           CVE-2022-28327 CVE-2022-28733 CVE-2022-28734 CVE-2022-28735
           CVE-2022-28736 CVE-2022-28737 CVE-2022-29162 CVE-2022-29810
           CVE-2022-29824 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323
           CVE-2022-32250
====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container Platform 4.11.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:5068

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Security Fix(es):

  • go-getter: command injection vulnerability (CVE-2022-26945)
  • go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
  • go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
  • go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
  • nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
  • sanitize-url: XSS (CVE-2021-23648)
  • minimist: prototype pollution (CVE-2021-44906)
  • node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
  • prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
  • golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
  • go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users (CVE-2022-29810)
  • opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64

The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4

(For aarch64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64

The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x

The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le

The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca
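The release images above can also be referenced by digest rather than tag, which pins the exact content. A small sketch of splitting such a digest-pinned pull spec with POSIX parameter expansion (the x86_64 digest from this advisory is reused here):

```shell
# Split an image pull spec of the form repo@sha256:<digest> into its parts.
spec="quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4"
repo=${spec%@*}      # everything before the '@'
digest=${spec#*@}    # everything after the '@'
echo "$repo"
echo "$digest"
```

A digest-pinned spec of this form can be passed to `oc adm release info` in place of the tagged form shown above.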

All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
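The per-architecture pull specs listed above can also be selected mechanically from the machine architecture. A minimal sketch; the `release_image` helper is illustrative and only covers the four architectures in this advisory:

```shell
# Map an architecture string (as reported by `uname -m`) to the matching
# 4.11.0 release image pull spec from this advisory.
release_image() {
  case "$1" in
    x86_64)  echo "quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64" ;;
    aarch64) echo "quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64" ;;
    s390x)   echo "quay.io/openshift-release-dev/ocp-release:4.11.0-s390x" ;;
    ppc64le) echo "quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le" ;;
    *)       return 1 ;;
  esac
}

release_image "$(uname -m)" || echo "architecture not covered by this advisory"
```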

3. Solution:

For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - oc adm policy who-can failed to check the operatorcondition/status resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect but got " on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the container-tools content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for --reference-policy in oc import-image without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - available of text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - oc debug node does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minnimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from vmx-13 to vmx-15
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in oc get
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Rrestart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work
2054385 - redhat-operatori ndex image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy .app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic exntension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the oc version --client
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s- pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_ metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted mutilple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if authorize property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but geting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn in not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because is passes a redundant "IMG=" on the the CLI
2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - sing the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - oc adm upgrade should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API to loose
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the oc-mirror version is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels flavor, os and workload
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and <pre> elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with "            error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBidings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5:  Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570 - The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing panel.styles attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1;
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not showing always in alerts. 
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated. 
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs". 
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. 
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 Missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesnt work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - oc debug node/nodename -- chroot /host somecommand should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema:  ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes. 
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused "apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int" when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node. 
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Opeatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar. 
2079965 - [rebase v1.24]  [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24]  [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24]  [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24]  [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - oc explain output of default overlaySize is outdated
2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.posStart hook does not have network connectivity. 
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time. 
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... inteface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external serivces URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. 
2084438 - Change Ping source spec.jsonData (deprecated) field  to spec.data
2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11
2086092 - update kube to v.24
2086143 - CNO uses too much memory
2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks
2086301 - kubernetes nmstate pods are not running after creating instance
2086408 - Podsecurity violation error getting logged for  externalDNS operand pods during deployment
2086417 - Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of resources not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply to restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by oc image mirror command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - --keep-manifest-list=true does not work for oc adm release new , only pick up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewrite the registry backend will hit error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings would violate PodSecurity "restricted:v1.24"
2088561 - BMH unable to start inspection: File name too long
2088634 - oc-mirror does not fail when catalog is invalid
2088660 - Nutanix IPI installation inside container failed
2088663 - Better to change the default value of --max-per-registry to 6
2089163 - NMState CRD out of sync with code
2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster
2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting
2089254 - CAPI operator: Rotate token secret if its older than 30 minutes
2089276 - origin tests for egressIP and azure fail
2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix
2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath. ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13  crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit error if specify the channel not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on multi-arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus logging should be disabled prior to release
2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. 
2090358 - Initiating drain log message is displayed before the drain actually starts
2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials
2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]
2090430 - gofmt code
2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool
2090437 - Bump CNO to k8s 1.24
2090465 - golang version mismatch
2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type
2090537 - failure in ovndb migration when db is not ready in HA mode
2090549 - dpu-network-operator shall be able to run on amd64 arch platform
2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD
2090627 - Git commit and branch are empty in MetalLB log
2090692 - Bump to latest 1.24 k8s release
2090730 - must-gather should include multus logs. 
2090731 - nmstate deploys two instances of webhook on a single-node cluster
2090751 - oc image mirror skip-missing flag does not skip images
2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers
2090774 - Add Readme to plugin directory
2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert
2090809 - gm.ClockClass  invalid syntax parse error in linux ptp daemon logs
2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition"
2090819 - oc-mirror does not catch invalid registry input when a namespace is specified
2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24
2090829 - Bump OpenShift router to k8s 1.24
2090838 - Flaky test: ignore flapping host interface 'tunbr'
2090843 - addLogicalPort() performance/scale optimizations
2090895 - Dynamic plugin nav extension "startsWith" property does not work
2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError
2091029 - Cancel rollout action only appears when rollout is completed
2091030 - Some BM may fail booting with default bootMode strategy
2091033 - [Descheduler]: provide ability to override included/excluded namespaces
2091087 - ODC Helm backend Owners file needs updates
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091167 - IPsec runtime enabling not work in hypershift
2091218 - Update Dev Console Helm backend to use helm 3.9.0
2091433 - Update AWS instance types
2091542 - Error Loading/404 not found page shown after clicking "Current namespace only"
2091547 - Internet connection test with proxy permanently fails
2091567 - oVirt CSI driver should use latest go-ovirt-client
2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled
2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interfaces in the same NIC according to the events and metrics
2091603 - WebSocket connection restarts when switching tabs in WebTerminal
2091613 - simple-kmod fails to build due to missing KVC
2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it
2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets"
2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec'
2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options
2091854 - clusteroperator status filter doesn't match all values in Status column
2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10
2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later
2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb
2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller
2092041 - Bump cluster-dns-operator to k8s 1.24
2092042 - Bump cluster-ingress-operator to k8s 1.24
2092047 - Kube 1.24 rebase for cloud-network-config-controller
2092137 - Search doesn't show all entries when name filter is cleared
2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16
2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results
2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved
2092442 - Minimum time between drain retries is not the expected one
2092464 - marketplace catalog defaults to v4.10
2092473 - libovsdb performance backports
2092495 - ovn: use up to 4 northd threads in non-SNO clusters
2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass
2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins
2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster
2092579 - Don't retry pod deletion if objects are not existing
2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks
2092703 - Incorrect mount propagation information in container status
2092815 - can't delete the unwanted image from registry by oc-mirror
2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds
2092867 - make repository name unique in acm-ice/acm-simple-kmod examples
2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes
2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os
2092889 - Incorrect updating of EgressACLs using direction "from-lport"
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing
2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit
2093047 - Dynamic Plugins: Generated API markdown duplicates checkAccess and useAccessReview doc
2093126 - [4.11] Bootimage bump tracker
2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade
2093288 - Default catalogs fails liveness/readiness probes
2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable
2093368 - Installer orphans FIPs created for LoadBalancer Services on cluster destroy
2093396 - Remove node-tainting for too-small MTU
2093445 - ManagementState reconciliation breaks SR
2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters
2093462 - Ingress Operator isn't reconciling the ingress cluster operator object
2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again
2093593 - Import from Devfile shows configuration options that shouldn't be there
2093597 - Import: Advanced option sentence is split into two parts and headlines have no padding
2093600 - Project access tab should apply new permissions before it deletes old ones
2093601 - Project access page doesn't allow the user to update the settings twice (without manually reloading the content)
2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24
2093797 - 'oc registry login' with serviceaccount function need update
2093819 - An etcd member for a new machine was never added to the cluster
2093930 - Gather console helm install  totals metric
2093957 - Oc-mirror write dup metadata to registry backend
2093986 - Podsecurity violation error getting logged for pod-identity-webhook
2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6
2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig
2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips
2094039 - egressIP panics with nil pointer dereference
2094055 - Bump coreos-installer for s390x Secure Execution
2094071 - No runbook created for SouthboundStale alert
2094088 - Columns in NBDB may never be updated by OVNK
2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator
2094152 - Alerts in the virtualization overview status card aren't filtered
2094196 - Add default and validating webhooks for Power VS MAPI
2094227 - Topology: Create Service Binding should not be the last option (even under delete)
2094239 - custom pool Nodes with 0 nodes are always populated in progress bar
2094303 - If og is configured with sa, operator installation will fail.
2094335 - [Nutanix] - debug logs are enabled by default in machine-controller
2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform
2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration
2094525 - Allow automatic upgrades for efs operator
2094532 - ovn-windows CI jobs are broken
2094675 - PTP Dual Nic  | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run
2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character
2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s
2094801 - Kuryr controller keep restarting when handling IPs with leading zeros
2094806 - Machine API oVirt component should use K8s 1.24 dependencies
2094816 - Kuryr controller restarts when over quota
2094833 - Repository overview page does not show default PipelineRun template for developer user
2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state
2094864 - Rebase CAPG to latest changes
2094866 - oc-mirror does not always delete all manifests associated with an image during pruning
2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing
2094902 - Fix installer cross-compiling
2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters
2095049 - managed-csi StorageClass does not create PVs
2095071 - Backend tests fails after devfile registry update
2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh
2095110 - [ovn] northd container termination script must use bash
2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp
2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance
2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic
2095231 - Kafka Sink sidebar in topology is empty
2095247 - Event sink form doesn't show channel as sink until app is refreshed
2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly, which may cause a pod with multiple volumes to be scheduled to a node that cannot satisfy its volume count
2095256 - Samples Owner needs to be Updated
2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection'
2095362 - oVirt CSI driver operator should use latest go-ovirt-client
2095574 - e2e-agnostic CI job fails
2095687 - Debug Container shown for build logs and on click ui breaks
2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster
2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns
2095756 - CNO panics with concurrent map read/write
2095772 - Memory requests for ovnkube-master containers are over-sized
2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB
2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized
2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode
2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6
2096315 - NodeClockNotSynchronising alert's severity should be critical
2096350 - Web console doesn't display webhook errors for upgrades
2096352 - Collect whole journal in gather
2096380 - acm-simple-kmod references deprecated KVC example
2096392 - Topology node icons are not properly visible in Dark mode
2096394 - Add page Card items background color does not match with column background color in Dark mode
2096413 - br-ex not created due to default bond interface having a different mac address than expected
2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile
2096605 - [vsphere] no validation checking for diskType
2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups
2096855 - oc adm release new failed with an error when using an existing multi-arch release image as input
2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider
2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import
2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology
2097043 - No clean way to specify operand issues to KEDA OLM operator
2097047 - MetalLB:  matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries
2097067 - ClusterVersion history pruner does not always retain initial completed update entry
2097153 - poor performance on API call to vCenter ListTags with thousands of tags
2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects
2097239 - Change Lower CPU limits for Power VS cloud
2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support
2097260 - openshift-install create manifests failed for Power VS platform
2097276 - MetalLB CI deploys the operator via manifests and not using the csv
2097282 - chore: update external-provisioner to the latest upstream release
2097283 - chore: update external-snapshotter to the latest upstream release
2097284 - chore: update external-attacher to the latest upstream release
2097286 - chore: update node-driver-registrar to the latest upstream release
2097334 - oc plugin help shows 'kubectl'
2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11
2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook
2097454 - Placeholder bug for OCP 4.11.0 metadata release
2097503 - chore: rebase against latest external-resizer
2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading
2097607 - Add Power VS support to Webhooks tests in actuator e2e test
2097685 - Ironic-agent can't restart because of existing container
2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1
2097810 - Required Network tools missing for Testing e2e PTP
2097832 - clean up unused IPv6DualStackNoUpgrade feature gate
2097940 - openshift-install destroy cluster traps if vpcRegion not specified
2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs failing
2098172 - oc-mirror does not validate the registry in the storage config
2098175 - invalid license in python-dataclasses-0.8-2.el8 spec
2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file
2098242 - typo in SRO specialresourcemodule
2098243 - Add error check to Platform create for Power VS
2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device
2098508 - Control-plane-machine-set-operator report panic
2098610 - No need to check the push permission with the 'manifests-only' option
2099293 - oVirt cluster API provider should use latest go-ovirt-client
2099330 - Edit application grouping is shown to user with view only access in a cluster
2099340 - CAPI e2e tests for AWS are missing
2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump
2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups
2099528 - Layout issue: No spacing in delete modals
2099561 - Prometheus returns HTTP 500 error on /favicon.ico
2099582 - Format and update Repository overview content
2099611 - Failures on etcd-operator watch channels
2099637 - Should print error when using --keep-manifest-list=false for a manifest list image
2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)
2099668 - KubeControllerManager should degrade when GC stops working
2099695 - Update CAPG after rebase
2099751 - specialresourcemodule stacktrace while looping over build status
2099755 - EgressIP node's mgmtIP reachability configuration option
2099763 - Update icons for event sources and sinks in topology, Add page, and context menu
2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]
2099821 - exporting a pointer for the loop variable
2099875 - The speaker won't start if there's another component on the host listening on 8080
2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing
2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file
2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster
2100001 - Sync upstream v1.22.0 downstream
2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator
2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment
2100038 - failure to update special-resource-lifecycle table during update Event
2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump
2100138 - release info --bugs has no differentiator between Jira and Bugzilla
2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation
2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar
2100323 - Sqlit-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied"
2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile
2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8
2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running
2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field
2100507 - Remove redundant log lines from obj_retry.go
2100536 - Update API to allow EgressIP node reachability check
2100601 - Update CNO to allow EgressIP node reachability check
2100643 - [Migration] [GCP]OVN can not rollback to SDN
2100644 - openshift-ansible FTBFS on RHEL8
2100669 - Telemetry should not log the full path if it contains a username
2100749 - [OCP 4.11] multipath support needs multipath modules
2100825 - Update machine-api-powervs go modules to latest version
2100841 - tiny openshift-install usability fix for setting KUBECONFIG
2101460 - An etcd member for a new machine was never added to the cluster
2101498 - Revert Bug 2082599: add upper bound to number of failed attempts
2102086 - The base image is still 4.10 for operator-sdk 1.22
2102302 - Dummy bug for 4.10 backports
2102362 - Valid regions should be allowed in GCP install config
2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster
2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption
2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install
2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root
2102947 - [VPA] recommender is logging errors for pods with init containers
2103053 - [4.11] Backport Prow CI improvements from master
2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly
2103080 - br-ex not created due to default bond interface having a different mac address than expected
2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces
2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not path-absolute for :path'
2103749 - MachineConfigPool is not getting updated
2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec
2104432 - [dpu-network-operator] Updating images to be consistent with ART
2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack
2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0
2104589 - must-gather namespace should have 'privileged' warn and audit pod security labels besides enforce
2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes
2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference"
2104727 - Bootstrap node should honor http proxy
2104906 - Uninstall fails with Observed a panic: runtime.boundsError
2104951 - Web console doesn't display webhook errors for upgrades
2104991 - Completed pods may not be correctly cleaned up
2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds
2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied
2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history
2105167 - BuildConfig throws error when using a label with a / in it
2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial
2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator
2105468 - The ccoctl does not seem to know how to leverage the VM's service account to talk to GCP APIs.
2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18
2106051 - Unable to deploy acm-ice using latest SRO 4.11 build
2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]
2106062 - [4.11] Bootimage bump tracker
2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc"
2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls
2106313 - bond-cni: backport bond-cni GA items to 4.11
2106543 - Typo in must-gather release-4.10
2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI
2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed. rpm-ostree status shows No space left on device
2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted
2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing
2107501 - metallb greenwave tests failure
2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found"
2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade
2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference
2108686 - rpm-ostreed: start limit hit easily
2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate
2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations
2111055 - dummy bug for 4.10.z bz2110938

  1. References:

https://access.redhat.com/security/cve/CVE-2018-25009 https://access.redhat.com/security/cve/CVE-2018-25010 https://access.redhat.com/security/cve/CVE-2018-25012 https://access.redhat.com/security/cve/CVE-2018-25013 https://access.redhat.com/security/cve/CVE-2018-25014 https://access.redhat.com/security/cve/CVE-2018-25032 https://access.redhat.com/security/cve/CVE-2019-5827 https://access.redhat.com/security/cve/CVE-2019-13750 https://access.redhat.com/security/cve/CVE-2019-13751 https://access.redhat.com/security/cve/CVE-2019-17594 https://access.redhat.com/security/cve/CVE-2019-17595 https://access.redhat.com/security/cve/CVE-2019-18218 https://access.redhat.com/security/cve/CVE-2019-19603 https://access.redhat.com/security/cve/CVE-2019-20838 https://access.redhat.com/security/cve/CVE-2020-13435 https://access.redhat.com/security/cve/CVE-2020-14155 https://access.redhat.com/security/cve/CVE-2020-17541 https://access.redhat.com/security/cve/CVE-2020-19131 https://access.redhat.com/security/cve/CVE-2020-24370 https://access.redhat.com/security/cve/CVE-2020-28493 https://access.redhat.com/security/cve/CVE-2020-35492 https://access.redhat.com/security/cve/CVE-2020-36330 https://access.redhat.com/security/cve/CVE-2020-36331 https://access.redhat.com/security/cve/CVE-2020-36332 https://access.redhat.com/security/cve/CVE-2021-3481 https://access.redhat.com/security/cve/CVE-2021-3580 https://access.redhat.com/security/cve/CVE-2021-3634 https://access.redhat.com/security/cve/CVE-2021-3672 https://access.redhat.com/security/cve/CVE-2021-3695 https://access.redhat.com/security/cve/CVE-2021-3696 https://access.redhat.com/security/cve/CVE-2021-3697 https://access.redhat.com/security/cve/CVE-2021-3737 https://access.redhat.com/security/cve/CVE-2021-4115 https://access.redhat.com/security/cve/CVE-2021-4156 https://access.redhat.com/security/cve/CVE-2021-4189 https://access.redhat.com/security/cve/CVE-2021-20095 https://access.redhat.com/security/cve/CVE-2021-20231 
https://access.redhat.com/security/cve/CVE-2021-20232 https://access.redhat.com/security/cve/CVE-2021-23177 https://access.redhat.com/security/cve/CVE-2021-23566 https://access.redhat.com/security/cve/CVE-2021-23648 https://access.redhat.com/security/cve/CVE-2021-25219 https://access.redhat.com/security/cve/CVE-2021-31535 https://access.redhat.com/security/cve/CVE-2021-31566 https://access.redhat.com/security/cve/CVE-2021-36084 https://access.redhat.com/security/cve/CVE-2021-36085 https://access.redhat.com/security/cve/CVE-2021-36086 https://access.redhat.com/security/cve/CVE-2021-36087 https://access.redhat.com/security/cve/CVE-2021-38185 https://access.redhat.com/security/cve/CVE-2021-38593 https://access.redhat.com/security/cve/CVE-2021-40528 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-41617 https://access.redhat.com/security/cve/CVE-2021-42771 https://access.redhat.com/security/cve/CVE-2021-43527 https://access.redhat.com/security/cve/CVE-2021-43818 https://access.redhat.com/security/cve/CVE-2021-44225 https://access.redhat.com/security/cve/CVE-2021-44906 https://access.redhat.com/security/cve/CVE-2022-0235 https://access.redhat.com/security/cve/CVE-2022-0778 https://access.redhat.com/security/cve/CVE-2022-1012 https://access.redhat.com/security/cve/CVE-2022-1215 https://access.redhat.com/security/cve/CVE-2022-1271 https://access.redhat.com/security/cve/CVE-2022-1292 https://access.redhat.com/security/cve/CVE-2022-1586 https://access.redhat.com/security/cve/CVE-2022-1621 https://access.redhat.com/security/cve/CVE-2022-1629 https://access.redhat.com/security/cve/CVE-2022-1706 https://access.redhat.com/security/cve/CVE-2022-1729 https://access.redhat.com/security/cve/CVE-2022-2068 https://access.redhat.com/security/cve/CVE-2022-2097 https://access.redhat.com/security/cve/CVE-2022-21698 https://access.redhat.com/security/cve/CVE-2022-22576 https://access.redhat.com/security/cve/CVE-2022-23772 
https://access.redhat.com/security/cve/CVE-2022-23773 https://access.redhat.com/security/cve/CVE-2022-23806 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/cve/CVE-2022-24675 https://access.redhat.com/security/cve/CVE-2022-24903 https://access.redhat.com/security/cve/CVE-2022-24921 https://access.redhat.com/security/cve/CVE-2022-25313 https://access.redhat.com/security/cve/CVE-2022-25314 https://access.redhat.com/security/cve/CVE-2022-26691 https://access.redhat.com/security/cve/CVE-2022-26945 https://access.redhat.com/security/cve/CVE-2022-27191 https://access.redhat.com/security/cve/CVE-2022-27774 https://access.redhat.com/security/cve/CVE-2022-27776 https://access.redhat.com/security/cve/CVE-2022-27782 https://access.redhat.com/security/cve/CVE-2022-28327 https://access.redhat.com/security/cve/CVE-2022-28733 https://access.redhat.com/security/cve/CVE-2022-28734 https://access.redhat.com/security/cve/CVE-2022-28735 https://access.redhat.com/security/cve/CVE-2022-28736 https://access.redhat.com/security/cve/CVE-2022-28737 https://access.redhat.com/security/cve/CVE-2022-29162 https://access.redhat.com/security/cve/CVE-2022-29810 https://access.redhat.com/security/cve/CVE-2022-29824 https://access.redhat.com/security/cve/CVE-2022-30321 https://access.redhat.com/security/cve/CVE-2022-30322 https://access.redhat.com/security/cve/CVE-2022-30323 https://access.redhat.com/security/cve/CVE-2022-32250 https://access.redhat.com/security/updates/classification/#important

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYvOfk9zjgjWX9erEAQhJ/w//UlbBGKBBFBAyfEmQf9Zu0yyv6MfZW0Zl
iO1qXVIl9UQUFjTY5ejerx7cP8EBWLhKaiiqRRjbjtj+w+ENGB4LLj6TEUrSM5oA
YEmhnX3M+GUKF7Px61J7rZfltIOGhYBvJ+qNZL2jvqz1NciVgI4/71cZWnvDbGpa
02w3Dn0JzhTSR9znNs9LKcV/anttJ3NtOYhqMXnN8EpKdtzQkKRazc7xkOTxfxyl
jRiER2Z0TzKDE6dMoVijS2Sv5j/JF0LRwetkZl6+oh8ehKh5GRV3lPg3eVkhzDEo
/gp0P9GdLMHi6cS6uqcREbod//waSAa7cssgULoycFwjzbDK3L2c+wMuWQIgXJca
RYuP6wvrdGwiI1mgUi/226EzcZYeTeoKxnHkp7AsN9l96pJYafj0fnK1p9NM/8g3
jBE/W4K8jdDNVd5l1Z5O0Nyxk6g4P8MKMe10/w/HDXFPSgufiCYIGX4TKqb+ESIR
SuYlSMjoGsB4mv1KMDEUJX6d8T05lpEwJT0RYNdZOouuObYMtcHLpRQHH9mkj86W
pHdma5aGG/mTMvSMW6l6L05uT41Azm6fVimTv+E5WvViBni2480CVH+9RexKKSyL
XcJX1gaLdo+72I/gZrtT+XE5tcJ3Sf5fmfsenQeY4KFum/cwzbM6y7RGn47xlEWB
xBWKPzRxz0Q=
=9r0B
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202305-16


                                       https://security.gentoo.org/

Severity: Low
Title: Vim, gVim: Multiple Vulnerabilities
Date: May 03, 2023
Bugs: #851231, #861092, #869359, #879257, #883681, #889730
ID: 202305-16


Synopsis

Multiple vulnerabilities have been found in Vim, the worst of which could result in denial of service. gVim is the GUI version of Vim.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

 1  app-editors/gvim          \u003c 9.0.1157                 \u003e= 9.0.1157
 2  app-editors/vim           \u003c 9.0.1157                 \u003e= 9.0.1157
 3  app-editors/vim-core      \u003c 9.0.1157                 \u003e= 9.0.1157

Description

Multiple vulnerabilities have been discovered in Vim and gVim. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All Vim users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=app-editors/vim-9.0.1157"

All gVim users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=app-editors/gvim-9.0.1157"

All vim-core users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=app-editors/vim-core-9.0.1157"

References

[ 1 ] CVE-2022-1154   https://nvd.nist.gov/vuln/detail/CVE-2022-1154
[ 2 ] CVE-2022-1160   https://nvd.nist.gov/vuln/detail/CVE-2022-1160
[ 3 ] CVE-2022-1381   https://nvd.nist.gov/vuln/detail/CVE-2022-1381
[ 4 ] CVE-2022-1420   https://nvd.nist.gov/vuln/detail/CVE-2022-1420
[ 5 ] CVE-2022-1616   https://nvd.nist.gov/vuln/detail/CVE-2022-1616
[ 6 ] CVE-2022-1619   https://nvd.nist.gov/vuln/detail/CVE-2022-1619
[ 7 ] CVE-2022-1620   https://nvd.nist.gov/vuln/detail/CVE-2022-1620
[ 8 ] CVE-2022-1621   https://nvd.nist.gov/vuln/detail/CVE-2022-1621
[ 9 ] CVE-2022-1629   https://nvd.nist.gov/vuln/detail/CVE-2022-1629
[ 10 ] CVE-2022-1674  https://nvd.nist.gov/vuln/detail/CVE-2022-1674
[ 11 ] CVE-2022-1720  https://nvd.nist.gov/vuln/detail/CVE-2022-1720
[ 12 ] CVE-2022-1725  https://nvd.nist.gov/vuln/detail/CVE-2022-1725
[ 13 ] CVE-2022-1733  https://nvd.nist.gov/vuln/detail/CVE-2022-1733
[ 14 ] CVE-2022-1735  https://nvd.nist.gov/vuln/detail/CVE-2022-1735
[ 15 ] CVE-2022-1769  https://nvd.nist.gov/vuln/detail/CVE-2022-1769
[ 16 ] CVE-2022-1771  https://nvd.nist.gov/vuln/detail/CVE-2022-1771
[ 17 ] CVE-2022-1785  https://nvd.nist.gov/vuln/detail/CVE-2022-1785
[ 18 ] CVE-2022-1796  https://nvd.nist.gov/vuln/detail/CVE-2022-1796
[ 19 ] CVE-2022-1851  https://nvd.nist.gov/vuln/detail/CVE-2022-1851
[ 20 ] CVE-2022-1886  https://nvd.nist.gov/vuln/detail/CVE-2022-1886
[ 21 ] CVE-2022-1897  https://nvd.nist.gov/vuln/detail/CVE-2022-1897
[ 22 ] CVE-2022-1898  https://nvd.nist.gov/vuln/detail/CVE-2022-1898
[ 23 ] CVE-2022-1927  https://nvd.nist.gov/vuln/detail/CVE-2022-1927
[ 24 ] CVE-2022-1942  https://nvd.nist.gov/vuln/detail/CVE-2022-1942
[ 25 ] CVE-2022-1968  https://nvd.nist.gov/vuln/detail/CVE-2022-1968
[ 26 ] CVE-2022-2000  https://nvd.nist.gov/vuln/detail/CVE-2022-2000
[ 27 ] CVE-2022-2042  https://nvd.nist.gov/vuln/detail/CVE-2022-2042
[ 28 ] CVE-2022-2124  https://nvd.nist.gov/vuln/detail/CVE-2022-2124
[ 29 ] CVE-2022-2125  https://nvd.nist.gov/vuln/detail/CVE-2022-2125
[ 30 ] CVE-2022-2126  https://nvd.nist.gov/vuln/detail/CVE-2022-2126
[ 31 ] CVE-2022-2129  https://nvd.nist.gov/vuln/detail/CVE-2022-2129
[ 32 ] CVE-2022-2175  https://nvd.nist.gov/vuln/detail/CVE-2022-2175
[ 33 ] CVE-2022-2182  https://nvd.nist.gov/vuln/detail/CVE-2022-2182
[ 34 ] CVE-2022-2183  https://nvd.nist.gov/vuln/detail/CVE-2022-2183
[ 35 ] CVE-2022-2206  https://nvd.nist.gov/vuln/detail/CVE-2022-2206
[ 36 ] CVE-2022-2207  https://nvd.nist.gov/vuln/detail/CVE-2022-2207
[ 37 ] CVE-2022-2208  https://nvd.nist.gov/vuln/detail/CVE-2022-2208
[ 38 ] CVE-2022-2210  https://nvd.nist.gov/vuln/detail/CVE-2022-2210
[ 39 ] CVE-2022-2231  https://nvd.nist.gov/vuln/detail/CVE-2022-2231
[ 40 ] CVE-2022-2257  https://nvd.nist.gov/vuln/detail/CVE-2022-2257
[ 41 ] CVE-2022-2264  https://nvd.nist.gov/vuln/detail/CVE-2022-2264
[ 42 ] CVE-2022-2284  https://nvd.nist.gov/vuln/detail/CVE-2022-2284
[ 43 ] CVE-2022-2285  https://nvd.nist.gov/vuln/detail/CVE-2022-2285
[ 44 ] CVE-2022-2286  https://nvd.nist.gov/vuln/detail/CVE-2022-2286
[ 45 ] CVE-2022-2287  https://nvd.nist.gov/vuln/detail/CVE-2022-2287
[ 46 ] CVE-2022-2288  https://nvd.nist.gov/vuln/detail/CVE-2022-2288
[ 47 ] CVE-2022-2289  https://nvd.nist.gov/vuln/detail/CVE-2022-2289
[ 48 ] CVE-2022-2304  https://nvd.nist.gov/vuln/detail/CVE-2022-2304
[ 49 ] CVE-2022-2343  https://nvd.nist.gov/vuln/detail/CVE-2022-2343
[ 50 ] CVE-2022-2344  https://nvd.nist.gov/vuln/detail/CVE-2022-2344
[ 51 ] CVE-2022-2345  https://nvd.nist.gov/vuln/detail/CVE-2022-2345
[ 52 ] CVE-2022-2522  https://nvd.nist.gov/vuln/detail/CVE-2022-2522
[ 53 ] CVE-2022-2816  https://nvd.nist.gov/vuln/detail/CVE-2022-2816
[ 54 ] CVE-2022-2817  https://nvd.nist.gov/vuln/detail/CVE-2022-2817
[ 55 ] CVE-2022-2819  https://nvd.nist.gov/vuln/detail/CVE-2022-2819
[ 56 ] CVE-2022-2845  https://nvd.nist.gov/vuln/detail/CVE-2022-2845
[ 57 ] CVE-2022-2849  https://nvd.nist.gov/vuln/detail/CVE-2022-2849
[ 58 ] CVE-2022-2862  https://nvd.nist.gov/vuln/detail/CVE-2022-2862
[ 59 ] CVE-2022-2874  https://nvd.nist.gov/vuln/detail/CVE-2022-2874
[ 60 ] CVE-2022-2889  https://nvd.nist.gov/vuln/detail/CVE-2022-2889
[ 61 ] CVE-2022-2923  https://nvd.nist.gov/vuln/detail/CVE-2022-2923
[ 62 ] CVE-2022-2946  https://nvd.nist.gov/vuln/detail/CVE-2022-2946
[ 63 ] CVE-2022-2980  https://nvd.nist.gov/vuln/detail/CVE-2022-2980
[ 64 ] CVE-2022-2982  https://nvd.nist.gov/vuln/detail/CVE-2022-2982
[ 65 ] CVE-2022-3016  https://nvd.nist.gov/vuln/detail/CVE-2022-3016
[ 66 ] CVE-2022-3099  https://nvd.nist.gov/vuln/detail/CVE-2022-3099
[ 67 ] CVE-2022-3134  https://nvd.nist.gov/vuln/detail/CVE-2022-3134
[ 68 ] CVE-2022-3153  https://nvd.nist.gov/vuln/detail/CVE-2022-3153
[ 69 ] CVE-2022-3234  https://nvd.nist.gov/vuln/detail/CVE-2022-3234
[ 70 ] CVE-2022-3235  https://nvd.nist.gov/vuln/detail/CVE-2022-3235
[ 71 ] CVE-2022-3256  https://nvd.nist.gov/vuln/detail/CVE-2022-3256
[ 72 ] CVE-2022-3278  https://nvd.nist.gov/vuln/detail/CVE-2022-3278
[ 73 ] CVE-2022-3296  https://nvd.nist.gov/vuln/detail/CVE-2022-3296
[ 74 ] CVE-2022-3297  https://nvd.nist.gov/vuln/detail/CVE-2022-3297
[ 75 ] CVE-2022-3324  https://nvd.nist.gov/vuln/detail/CVE-2022-3324
[ 76 ] CVE-2022-3352  https://nvd.nist.gov/vuln/detail/CVE-2022-3352
[ 77 ] CVE-2022-3491  https://nvd.nist.gov/vuln/detail/CVE-2022-3491
[ 78 ] CVE-2022-3520  https://nvd.nist.gov/vuln/detail/CVE-2022-3520
[ 79 ] CVE-2022-3591  https://nvd.nist.gov/vuln/detail/CVE-2022-3591
[ 80 ] CVE-2022-3705  https://nvd.nist.gov/vuln/detail/CVE-2022-3705
[ 81 ] CVE-2022-4141  https://nvd.nist.gov/vuln/detail/CVE-2022-4141
[ 82 ] CVE-2022-4292  https://nvd.nist.gov/vuln/detail/CVE-2022-4292
[ 83 ] CVE-2022-4293  https://nvd.nist.gov/vuln/detail/CVE-2022-4293
[ 84 ] CVE-2022-47024 https://nvd.nist.gov/vuln/detail/CVE-2022-47024
[ 85 ] CVE-2023-0049  https://nvd.nist.gov/vuln/detail/CVE-2023-0049
[ 86 ] CVE-2023-0051  https://nvd.nist.gov/vuln/detail/CVE-2023-0051
[ 87 ] CVE-2023-0054  https://nvd.nist.gov/vuln/detail/CVE-2023-0054

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202305-16

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2023 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

==========================================================================
Ubuntu Security Notice USN-5613-2
September 19, 2022

vim regression

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 20.04 LTS

Summary:

USN-5613-1 caused a regression in Vim.

Software Description:
- vim: Vi IMproved - enhanced vi editor

Details:

USN-5613-1 fixed vulnerabilities in Vim. Unfortunately that update failed to include binary packages for some architectures. This update fixes that regression.

We apologize for the inconvenience.

Original advisory details:

It was discovered that Vim was not properly performing bounds checks when executing spell suggestion commands. An attacker could possibly use this issue to cause a denial of service or execute arbitrary code. (CVE-2022-0943)

It was discovered that Vim was using freed memory when dealing with regular expressions through its old regular expression engine. If a user were tricked into opening a specially crafted file, an attacker could crash the application, leading to a denial of service, or possibly achieve code execution. (CVE-2022-1154)

It was discovered that Vim was not properly performing checks on the names of lambda functions. An attacker could possibly use this issue to cause a denial of service. This issue affected only Ubuntu 22.04 LTS. (CVE-2022-1420)

It was discovered that Vim was incorrectly performing bounds checks when processing invalid commands with composing characters in Ex mode. An attacker could possibly use this issue to cause a denial of service or execute arbitrary code. (CVE-2022-1616)

It was discovered that Vim was not properly processing latin1 data when issuing Ex commands. An attacker could possibly use this issue to cause a denial of service or execute arbitrary code. (CVE-2022-1619)

It was discovered that Vim was not properly performing memory management when dealing with invalid regular expression patterns in buffers. An attacker could possibly use this issue to cause a denial of service. (CVE-2022-1620)

It was discovered that Vim was not properly processing invalid bytes when performing spell check operations. An attacker could possibly use this issue to cause a denial of service or execute arbitrary code. (CVE-2022-1621)

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 20.04 LTS:
  vim   2:8.1.2269-1ubuntu5.9

In general, a standard system update will make all the necessary changes.

Solution:

OSP 16.2 Release - OSP Director Operator Containers tech preview

  1. Bugs fixed (https://bugzilla.redhat.com/):

2011007 - CVE-2021-41103 containerd: insufficiently restricted permissions on container root and plugin directories
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability

  1. Bugs fixed (https://bugzilla.redhat.com/):

2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS




{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202205-0855",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "vim",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "vim",
        "version": "8.2.4919"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "9.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      },
      {
        "model": "macos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "vim",
        "scope": null,
        "trust": 0.8,
        "vendor": "vim",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1621"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:vim:vim:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "8.2.4919",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "13.0",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-1621"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "167666"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "167778"
      },
      {
        "db": "PACKETSTORM",
        "id": "167984"
      }
    ],
    "trust": 0.4
  },
  "cve": "CVE-2022-1621",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "NVD",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "MEDIUM",
            "trust": 1.0,
            "userInteractionRequired": true,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Medium",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "Partial",
            "baseScore": 6.8,
            "confidentialityImpact": "Partial",
            "exploitabilityScore": null,
            "id": "CVE-2022-1621",
            "impactScore": null,
            "integrityImpact": "Partial",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-419734",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "NVD",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "security@huntr.dev",
            "availabilityImpact": "HIGH",
            "baseScore": 7.3,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 1.8,
            "impactScore": 5.5,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:L/I:H/A:H",
            "version": "3.0"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Local",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 7.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-1621",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2022-1621",
            "trust": 1.8,
            "value": "HIGH"
          },
          {
            "author": "security@huntr.dev",
            "id": "CVE-2022-1621",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202205-2826",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-419734",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-419734"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-2826"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1621"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1621"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Heap buffer overflow in vim_strncpy find_word in GitHub repository vim/vim prior to 8.2.4919. This vulnerability is capable of crashing software, Bypass Protection Mechanism, Modify Memory, and possible remote execution. vim/vim Exists in an out-of-bounds write vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. \n\nSecurity Fix(es):\n\n* vim: Use of Out-of-range Pointer Offset in vim (CVE-2022-0554)\n\n* vim: Heap-based Buffer Overflow occurs in vim (CVE-2022-0943)\n\n* vim: Out-of-range Pointer Offset (CVE-2022-1420)\n\n* vim: heap buffer overflow (CVE-2022-1621)\n\n* vim: buffer over-read (CVE-2022-1629)\n\n* vim: use after free in utf_ptr2char (CVE-2022-1154)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2058483 - CVE-2022-0554 vim: Use of Out-of-range Pointer Offset in vim\n2064064 - CVE-2022-0943 vim: Heap-based Buffer Overflow occurs in vim\n2073013 - CVE-2022-1154 vim: use after free in utf_ptr2char\n2077734 - CVE-2022-1420 vim: Out-of-range Pointer Offset\n2083924 - CVE-2022-1621 vim: heap buffer overflow\n2083931 - CVE-2022-1629 vim: buffer over-read\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: OpenShift Container Platform 4.11.0 bug fix and security update\nAdvisory ID:       RHSA-2022:5069-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:5069\nIssue date:        2022-08-10\nCVE Names:         CVE-2018-25009 CVE-2018-25010 CVE-2018-25012\n                   CVE-2018-25013 CVE-2018-25014 CVE-2018-25032\n                   CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n                   CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n                   CVE-2019-19603 CVE-2019-20838 CVE-2020-13435\n                   CVE-2020-14155 CVE-2020-17541 CVE-2020-19131\n                   CVE-2020-24370 CVE-2020-28493 CVE-2020-35492\n                   CVE-2020-36330 CVE-2020-36331 CVE-2020-36332\n                   CVE-2021-3481 CVE-2021-3580 CVE-2021-3634\n                   CVE-2021-3672 CVE-2021-3695 CVE-2021-3696\n                   CVE-2021-3697 CVE-2021-3737 CVE-2021-4115\n                   CVE-2021-4156 CVE-2021-4189 CVE-2021-20095\n                   CVE-2021-20231 CVE-2021-20232 CVE-2021-23177\n                   CVE-2021-23566 CVE-2021-23648 CVE-2021-25219\n                   CVE-2021-31535 CVE-2021-31566 CVE-2021-36084\n                   CVE-2021-36085 CVE-2021-36086 CVE-2021-36087\n                   CVE-2021-38185 CVE-2021-38593 CVE-2021-40528\n                   CVE-2021-41190 CVE-2021-41617 CVE-2021-42771\n                   CVE-2021-43527 CVE-2021-43818 CVE-2021-44225\n                   CVE-2021-44906 CVE-2022-0235 CVE-2022-0778\n                   CVE-2022-1012 CVE-2022-1215 CVE-2022-1271\n                   CVE-2022-1292 CVE-2022-1586 CVE-2022-1621\n                   CVE-2022-1629 CVE-2022-1706 CVE-2022-1729\n                   CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n              
     CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n                   CVE-2022-23806 CVE-2022-24407 CVE-2022-24675\n                   CVE-2022-24903 CVE-2022-24921 CVE-2022-25313\n                   CVE-2022-25314 CVE-2022-26691 CVE-2022-26945\n                   CVE-2022-27191 CVE-2022-27774 CVE-2022-27776\n                   CVE-2022-27782 CVE-2022-28327 CVE-2022-28733\n                   CVE-2022-28734 CVE-2022-28735 CVE-2022-28736\n                   CVE-2022-28737 CVE-2022-29162 CVE-2022-29810\n                   CVE-2022-29824 CVE-2022-30321 CVE-2022-30322\n                   CVE-2022-30323 CVE-2022-32250\n====================================================================\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.11.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.11. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.11.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:5068\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n* sanitize-url: XSS (CVE-2021-23648)\n* minimist: prototype pollution (CVE-2021-44906)\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 

You may download the oc tool and use it to inspect release image metadata
as follows:

(For x86_64 architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64

The image digest is
sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4

(For aarch64 architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64

The image digest is
sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe

(For s390x architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.11.0-s390x

The image digest is
sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46

(For ppc64le architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le

The image digest is
sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca

All OpenShift Container Platform 4.11 users are advised to upgrade to these
updated packages and images when they are available in the appropriate
release channel. To check for available updates, use the OpenShift Console
or the CLI oc command. Instructions for upgrading a cluster are available
at
https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.11 see the following documentation,
which will be updated shortly for this release, for important instructions
on how to upgrade your cluster and fully apply this asynchronous errata
update:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

4.
Bugs fixed (https://bugzilla.redhat.com/):

1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect <fault> but got <html>" on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type="Approved"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the `container-tools` content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $__auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork* objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for `--reference-policy` in `oc import-image` without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - <x> available of <y> text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - `oc debug node` does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type *v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
\n2051377 - Unable to switch vfio-pci to netdevice in policy\n2051378 - Template wizard is crashed when there are no templates existing\n2051423 - migrate loadbalancers from amphora to ovn not working\n2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down\n2051470 - prometheus: Add validations for relabel configs\n2051558 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2051578 - Sort is broken for the Status and Version columns on the Cluster Settings \u003e ClusterOperators page\n2051583 - sriov must-gather image doesn\u0027t work\n2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2051611 - Remove Check which enforces summary_interval must match logSyncInterval\n2051642 - Remove \"Tech-Preview\" Label for the Web Terminal GA release\n2051657 - Remove \u0027Tech preview\u0027 from minnimal deployment Storage System creation\n2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s\n2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2051954 - Allow changing of policyAuditConfig ratelimit post-deployment\n2051969 - Need to build local-storage-operator-metadata-container image for 4.11\n2051985 - An APIRequestCount without dots in the name can cause a panic\n2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. 
\n2052034 - Can\u0027t start correct debug pod using pod definition yaml in OCP 4.8\n2052055 - Whereabouts should implement client-go 1.22+\n2052056 - Static pod installer should throttle creating new revisions\n2052071 - local storage operator metrics target down after upgrade\n2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052270 - FSyncControllerDegraded has \"treshold\" -\u003e \"threshold\" typos\n2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade\n2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters\n2052415 - Pod density test causing problems when using kube-burner\n2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. \n2052578 - Create new app from a private git repository using \u0027oc new app\u0027 with basic auth does not work. 
\n2052595 - Remove dev preview badge from IBM FlashSystem deployment windows\n2052618 - Node reboot causes duplicate persistent volumes\n2052671 - Add Sprint 214 translations\n2052674 - Remove extra spaces\n2052700 - kube-controller-manger should use configmap lease\n2052701 - kube-scheduler should use configmap lease\n2052814 - go fmt fails in OSM after migration to go 1.17\n2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker\n2052953 - Observe dashboard always opens for last viewed workload instead of the selected one\n2052956 - Installing virtualization operator duplicates the first action on workloads in topology\n2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26\n2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as \"Tags the current image as an image stream tag if the deployment succeeds\"\n2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from `vmx-13` to  `vmx-15`\n2053112 - nncp status is unknown when nnce is Progressing\n2053118 - nncp Available condition reason should be exposed in `oc get`\n2053168 - Ensure the core dynamic plugin SDK package has correct types and code\n2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time\n2053304 - Debug terminal no longer works in admin console\n2053312 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053334 - rhel worker scaleup playbook failed because missing some dependency of podman\n2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down\n2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update\n2053501 - Git import detection does 
not happen for private repositories\n2053582 - inability to detect static lifecycle failure\n2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization\n2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated\n2053622 - PDB warning alert when CR replica count is set to zero\n2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)\n2053721 - When using RootDeviceHint rotational setting the host can fail to provision\n2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids\n2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition\n2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet\n2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer\n2054238 - console-master-e2e-gcp-console is broken\n2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal\n2054319 - must-gather | gather_metallb_logs can\u0027t detect metallb pod\n2054351 - Rrestart of ptp4l/phc2sys  on change of PTPConfig  generates more than one times, socket error in event frame work\n2054385 - redhat-operatori ndex image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13\n2054564 - DPU network operator 4.10 branch need to sync with master\n2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page\n2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4\n2054701 - [MAPO] Events are not created for MAPO machines\n2054705 - 
[tracker] nf_reinject calls nf_queue_entry_free on an already freed entry-\u003estate\n2054735 - Bad link in CNV console\n2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress\n2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions\n2054950 - A large number is showing on disk size field\n2055305 - Thanos Querier high CPU and memory usage till OOM\n2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition\n2055433 - Unable to create br-ex as gateway is not found\n2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2055492 - The default YAML on vm wizard is not latest\n2055601 - installer did not destroy *.app dns recored in a IPI on ASH install\n2055702 - Enable Serverless tests in CI\n2055723 - CCM operator doesn\u0027t deploy resources after enabling TechPreviewNoUpgrade feature set. 
\n2055729 - NodePerfCheck fires and stays active on momentary high latency\n2055814 - Custom dynamic exntension point causes runtime and compile time error\n2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status\n2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions\n2056454 - Implement preallocated disks for oVirt in the cluster API provider\n2056460 - Implement preallocated disks for oVirt in the OCP installer\n2056496 - If image does not exists for builder image then upload jar form crashes\n2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies\n2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters\n2056752 - Better to named the oc-mirror version info with more information like the `oc version --client`\n2056802 - \"enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit\" do not take effect\n2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed\n2056893 - incorrect warning for --to-image in oc adm upgrade help\n2056967 - MetalLB: speaker metrics is not updated when deleting a service\n2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high\n2057054 - SDK: k8s methods resolves into Response instead of the Resource\n2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically\n2057101 - oc commands working with images print an incorrect and inappropriate warning\n2057160 - configure-ovs selects wrong interface on reboot\n2057183 - OperatorHub: Missing \"valid subscriptions\" filter\n2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled\n2057358 - [Secondary Scheduler] - cannot build bundle index 
image using the secondary scheduler operator bundle\n2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion\n2057403 - CMO logs show forbidden: User \"system:serviceaccount:openshift-monitoring:cluster-monitoring-operator\" cannot get resource \"replicasets\" in API group \"apps\" in the namespace \"openshift-monitoring\"\n2057495 - Alibaba Disk CSI driver does not provision small PVCs\n2057558 - Marketplace operator polls too frequently for cluster operator status changes\n2057633 - oc rsync reports misleading error when container is not found\n2057642 - ClusterOperator status.conditions[].reason \"etcd disk metrics exceeded...\" should be a CamelCase slug\n2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members\n2057696 - Removing console still blocks OCP install from completing\n2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used\n2057832 - expr for record rule: \"cluster:telemetry_selected_series:count\" is improper\n2057967 - KubeJobCompletion does not account for possible job states\n2057990 - Add extra debug information to image signature workflow test\n2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information\n2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain\n2058217 - [vsphere-problem-detector-operator] \u0027vsphere_rwx_volumes_total\u0027 metric name make confused\n2058225 - openshift_csi_share_* metrics are not found from telemeter server\n2058282 - Websockets stop updating during cluster upgrades\n2058291 - CI builds should have correct version of Kube without needing to push tags everytime\n2058368 - Openshift OVN-K got restarted mutilple times with the error  \" ovsdb-server/memory-trim-on-compaction on\u0027\u0027 failed: exit status 1   and \" ovndbchecker.go:118] unable to turn on memory       trimming for SB DB, 
stderr \"  , cluster  unavailable\n2058370 - e2e-aws-driver-toolkit CI job is failing\n2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2058424 - ConsolePlugin proxy always passes Authorization header even if `authorize` property is omitted or false\n2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it\u0027s created\n2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid \"1000\" but geting \"root\"\n2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage \u0026 proper backoff\n2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error \"key failed with : secondaryschedulers.operator.openshift.io \"secondary-scheduler\" not found\"\n2059187 - [Secondary Scheduler] -  key failed with : serviceaccounts \"secondary-scheduler\" is forbidden\n2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa\n2059213 - ART cannot build installer images due to missing terraform binaries for some architectures\n2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)\n2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect\n2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override\n2059586 - (release-4.11) Insights operator doesn\u0027t reconcile clusteroperator status condition messages\n2059654 - Dynamic demo plugin proxy example out of date\n2059674 - Demo plugin fails to build\n2059716 - cloud-controller-manager flaps operator version during 4.9 -\u003e 4.10 update\n2059791 - [vSphere CSI driver Operator] didn\u0027t update \u0027vsphere_csi_driver_error\u0027 metric value when fixed the error manually\n2059840 - 
[LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager\n2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo\n2060037 - Configure logging level of FRR containers\n2060083 - CMO doesn\u0027t react to changes in clusteroperator console\n2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset\n2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found\n2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time\n2060159 - LGW: External-\u003eService of type ETP=Cluster doesn\u0027t go to the node\n2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology\n2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group\n2060361 - Unable to enumerate NICs due to missing the \u0027primary\u0027 field due to security restrictions\n2060406 - Test \u0027operators should not create watch channels very often\u0027 fails\n2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4\n2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10\n2060532 - LSO e2e tests are run against default image and namespace\n2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip\n2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!\n2060553 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n2060583 - Remove Console internal-kubevirt plugin SDK package\n2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060617 - IBMCloud destroy DNS regex not strict enough\n2060687 - Azure Ci:  SubscriptionDoesNotSupportZone  - does not support availability zones at location \u0027westus\u0027\n2060697 - [AWS] partitionNumber cannot work for specifying Partition number\n2060714 - [DOCS] Change source_labels 
to sourceLabels in \"Configuring remote write storage\" section\n2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field\n2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page\n2060924 - Console white-screens while using debug terminal\n2060968 - Installation failing due to ironic-agent.service not starting properly\n2060970 - Bump recommended FCOS to 35.20220213.3.0\n2061002 - Conntrack entry is not removed for LoadBalancer IP\n2061301 - Traffic Splitting Dialog is Confusing With Only One Revision\n2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum\n2061304 - workload info gatherer - don\u0027t serialize empty images map\n2061333 - White screen for Pipeline builder page\n2061447 - [GSS] local pv\u0027s are in terminating state\n2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string\n2061527 - [IBMCloud] infrastructure asset missing CloudProviderType\n2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type\n2061549 - AzureStack install with internal publishing does not create api DNS record\n2061611 - [upstream] The marker of KubeBuilder doesn\u0027t work if it is close to the code\n2061732 - Cinder CSI crashes when API is not available\n2061755 - Missing breadcrumb on the resource creation page\n2061833 - A single worker can be assigned to multiple baremetal hosts\n2061891 - [IPI on IBMCLOUD]  missing ?br-sao? 
region in openshift installer\n2061916 - mixed ingress and egress policies can result in half-isolated pods\n2061918 - Topology Sidepanel style is broken\n2061919 - Egress Ip entry stays on node\u0027s primary NIC post deletion from hostsubnet\n2062007 - MCC bootstrap command lacks template flag\n2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn\u0027t exist\n2062151 - Add RBAC for \u0027infrastructures\u0027 to operator bundle\n2062355 - kubernetes-nmstate resources and logs not included in must-gathers\n2062459 - Ingress pods scheduled on the same node\n2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref\n2062558 - Egress IP with openshift sdn in not functional on worker node. \n2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload\n2062645 - configure-ovs: don\u0027t restart networking if not necessary\n2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric\n2062849 - hw event proxy is not binding on ipv6 local address\n2062920 - Project selector is too tall with only a few projects\n2062998 - AWS GovCloud regions are recognized as the unknown regions\n2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator\n2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod\n2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available\n2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster\n2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster\n2063321 - [OVN]After reboot egress node,  lr-policy-list was not correct, some duplicate records or missed internal IPs\n2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged 
container environments\n2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met\n2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes\n2063699 - Builds - Builds - Logs: i18n misses. \n2063708 - Builds - Builds - Logs: translation correction needed. \n2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)\n2063732 - Workloads - StatefulSets : I18n misses\n2063747 - When building a bundle, the push command fails because is passes a redundant \"IMG=\" on the the CLI\n2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language. \n2063756 - User Preferences - Applications - Insecure traffic : i18n misses\n2063795 - Remove go-ovirt-client go.mod replace directive\n2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting \"Check\": platform.vsphere.network: Invalid value: \"VLAN_3912\": unable to find network provided\"\n2063831 - etcd quorum pods landing on same node\n2063897 - Community tasks not shown in pipeline builder page\n2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server\n2063938 - sing the hard coded rest-mapper in library-go\n2063955 - cannot download operator catalogs due to missing images\n2063957 - User Management - Users : While Impersonating user, UI is not switching into user\u0027s set language\n2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod\n2064170 - [Azure] Missing punctuation in the  installconfig.controlPlane.platform.azure.osDisk explain\n2064239 - Virtualization Overview page turns into blank page\n2064256 - The Knative traffic distribution doesn\u0027t update percentage in sidebar\n2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation\n2064596 - Fix the hubUrl 
docs link in pipeline quicksearch modal\n2064607 - Pipeline builder makes too many (100+) API calls upfront\n2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator\n2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064705 - the alertmanagerconfig validation catches the wrong value for invalid field\n2064744 - Errors trying to use the Debug Container feature\n2064984 - Update error message for label limits\n2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL\n2065160 - Possible leak of load balancer targets on AWS Machine API Provider\n2065224 - Configuration for cloudFront in image-registry operator configuration is ignored \u0026 duration is corrupted\n2065290 - CVE-2021-23648 sanitize-url: XSS\n2065338 - VolumeSnapshot creation date sorting is broken\n2065507 - `oc adm upgrade` should return ReleaseAccepted condition to show upgrade status. 
\n2065510 - [AWS] failed to create cluster on ap-southeast-3\n2065513 - Dev Perspective -\u003e Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places\n2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors\n2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error\n2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap\n2065597 - Cinder CSI is not configurable\n2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id__ to all metrics\n2065689 - Internal Image registry with GCS backend does not redirect client\n2065749 - Kubelet slowly leaking memory and pods eventually unable to start\n2065785 - ip-reconciler job does not complete, halts node drain\n2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204\n2065806 - stop considering Mint mode as supported on Azure\n2065840 - the cronjob object is created  with a  wrong api version batch/v1beta1 when created  via the openshift console\n2065893 - [4.11] Bootimage bump tracker\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2066232 - e2e-aws-workers-rhel8 is failing on ansible check\n2066418 - [4.11] Update channels information link is taking to a 404 error page\n2066444 - The \"ingress\" clusteroperator\u0027s relatedObjects field has kind names instead of resource names\n2066457 - Prometheus CI failure: 503 Service Unavailable\n2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified\n2066605 - coredns template block matches cluster API to loose\n2066615 - Downstream OSDK still use upstream image for Hybird type operator\n2066619 - The GitCommit of the `oc-mirror version` is not correct\n2066665 - [ibm-vpc-block] Unable to change default storage class\n2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in 
Cluster and Local Roles\n2066754 - Cypress reports for core tests are not captured\n2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user\n2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies\n2066886 - openshift-apiserver pods never going NotReady\n2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2066923 - No rule to make target \u0027docker-push\u0027 when building the SRO bundle\n2066945 - SRO appends \"arm64\" instead of \"aarch64\" to the kernel name and it doesn\u0027t match the DTK\n2067004 - CMO contains grafana image though grafana is removed\n2067005 - Prometheus rule contains grafana though grafana is removed\n2067062 - should update prometheus-operator resources version\n2067064 - RoleBinding in Developer Console is dropping all subjects when editing\n2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole\n2067180 - Missing i18n translations\n2067298 - Console 4.10 operand form refresh\n2067312 - PPT event source is lost when received by the consumer\n2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25\n2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25\n2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling\n2068115 - resource tab extension fails to show up\n2068148 - [4.11] /etc/redhat-release symlink is broken\n2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator\n2068181 - Event source powered with kamelet type source doesn\u0027t show associated deployment in resources tab\n2068490 - OLM descriptors integration test failing\n2068538 - 
Crashloop back-off popover visual spacing defects\n2068601 - Potential etcd inconsistent revision and data occurs\n2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs\n2068908 - Manual blog link change needed\n2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35\n2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state\n2069181 - Disabling community tasks is not working\n2069198 - Flaky CI test in e2e/pipeline-ci\n2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog\n2069312 - extend rest mappings with \u0027job\u0027 definition\n2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services\n2069577 - ConsolePlugin example proxy authorize is wrong\n2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes\n2069632 - Not able to download previous container logs from console\n2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap\n2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels `flavor`, `os` and `workload`\n2069685 - UI crashes on load if a pinned resource model does not exist\n2069705 - prometheus target \"serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0\" has a failure with \"server returned HTTP status 502 Bad Gateway\"\n2069740 - On-prem loadbalancer ports conflict with kube node port range\n2069760 - In developer perspective divider does not show up in navigation\n2069904 - Sync upstream 1.18.1 downstream\n2069914 - Application Launcher groupings are not case-sensitive\n2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2070000 - Add warning alerts for installing standalone k8s-nmstate\n2070020 - InContext doesn\u0027t work for Event Sources\n2070047 - Kuryr: Prometheus when installed on the cluster shouldn\u0027t report any alerts in firing 
state apart from Watchdog and AlertmanagerReceiversNotConfigured\n2070160 - Copy-to-clipboard and \u003cpre\u003e elements cause display issues for ACM dynamic plugins\n2070172 - SRO uses the chart\u0027s name as Helm release, not the SpecialResource\u0027s\n2070181 - [MAPO] serverGroupName ignored\n2070457 - Image vulnerability Popover overflows from the visible area\n2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes\n2070703 - some ipv6 network policy tests consistently failing\n2070720 - [UI] Filter reset doesn\u0027t work on Pods/Secrets/etc pages and complete list disappears\n2070731 - details switch label is not clickable on add page\n2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled\n2070792 - service \"openshift-marketplace/marketplace-operator-metrics\" is not annotated with capability\n2070805 - ClusterVersion: could not download the update\n2070854 - cv.status.capabilities.enabledCapabilities doesn?t show the day-2 enabled caps when there are errors on resources update\n2070887 - Cv condition ImplicitlyEnabledCapabilities doesn?t complain about the disabled capabilities which is previously enabled\n2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci\n2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes\n2071019 - rebase vsphere csi driver 2.5\n2071021 - vsphere driver has snapshot support missing\n2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong\n2071139 - Ingress pods scheduled on the same node\n2071364 - All image building tests are broken with \"            error: build error: attempting to convert BUILD_LOGLEVEL env var value \"\" to integer: strconv.Atoi: parsing \"\": invalid syntax\n2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)\n2071599 - RoleBidings are not getting updated 
for ClusterRole in OpenShift Web Console\n2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType\n2071617 - remove Kubevirt extensions in favour of dynamic plugin\n2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO\n2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs\n2071700 - v1 events show \"Generated from\" message without the source/reporting component\n2071715 - Shows 404 on Environment nav in Developer console\n2071719 - OCP Console global PatternFly overrides link button whitespace\n2071747 - Link to documentation from the overview page goes to a missing link\n2071761 - Translation Keys Are Not Namespaced\n2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable\n2071859 - ovn-kube pods spec.dnsPolicy should be Default\n2071914 - cloud-network-config-controller 4.10.5:  Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name \"\"\n2071998 - Cluster-version operator should share details of signature verification when it fails in \u0027Force: true\u0027 updates\n2072106 - cluster-ingress-operator tests do not build on go 1.18\n2072134 - Routes are not accessible within cluster from hostnet pods\n2072139 - vsphere driver has permissions to create/update PV objects\n2072154 - Secondary Scheduler operator panics\n2072171 - Test \"[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]\" fails\n2072195 - machine api doesn\u0027t issue client cert when AWS DNS suffix missing\n2072215 - Whereabouts ip-reconciler should be opt-in and not required\n2072389 - CVO exits upgrade immediately rather than waiting for etcd backup\n2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes\n2072455 - make bundle overwrites 
supported-nic-ids_v1_configmap.yaml\n2072570 - The namespace titles for operator-install-single-namespace test keep changing\n2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)\n2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master\n2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node\n2072793 - Drop \"Used Filesystem\" from \"Virtualization -\u003e Overview\"\n2072805 - Observe \u003e Dashboards: $__range variables cause PromQL query errors\n2072807 - Observe \u003e Dashboards: Missing `panel.styles` attribute for table panels causes JS error\n2072842 - (release-4.11) Gather namespace names with overlapping UID ranges\n2072883 - sometimes monitoring dashboards charts can not be loaded successfully\n2072891 - Update gcp-pd-csi-driver to 1.5.1;\n2072911 - panic observed in kubedescheduler operator\n2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial\n2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system\n2072998 - update aws-efs-csi-driver to the latest version\n2072999 - Navigate from logs of selected Tekton task instead of last one\n2073021 - [vsphere] Failed to update OS on master nodes\n2073112 - Prometheus (uwm) externalLabels not showing always in alerts. \n2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to \"${HOME}/.docker/config.json\" is deprecated. \n2073176 - removing data in form does not remove data from yaml editor\n2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists\n2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it\u0027s \"PipelineRuns\" and on Repository Details page it\u0027s \"Pipeline Runs\". 
\n2073373 - Update azure-disk-csi-driver to 1.16.0\n2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig\n2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning\n2073436 - Update azure-file-csi-driver to v1.14.0\n2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls\n2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)\n2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. \n2073522 - Update ibm-vpc-block-csi-driver to v4.2.0\n2073525 - Update vpc-node-label-updater to v4.1.2\n2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled\n2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW\n2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses\n2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies\n2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring\n2074009 - [OVN] ovn-northd doesn\u0027t clean Chassis_Private record after scale down to 0 a machineSet\n2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary\n2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn\u0027t work well\n2074084 - CMO metrics not visible in the OCP webconsole UI\n2074100 - CRD filtering according to name broken\n2074210 - asia-south2, australia-southeast2, and southamerica-west1Missing from GCP regions\n2074237 - oc new-app --image-stream flag behavior is unclear\n2074243 - DefaultPlacement API allow empty enum value and remove 
default\n2074447 - cluster-dashboard: CPU Utilisation iowait and steal\n2074465 - PipelineRun fails in import from Git flow if \"main\" branch is default\n2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled\n2074475 - [e2e][automation] kubevirt plugin cypress tests fail\n2074483 - coreos-installer doesnt work on Dell machines\n2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes\n2074585 - MCG standalone deployment page goes blank when the KMS option is enabled\n2074606 - occm does not have permissions to annotate SVC objects\n2074612 - Operator fails to install due to service name lookup failure\n2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system\n2074635 - Unable to start Web Terminal after deleting existing instance\n2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records\n2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver\n2074710 - Transition to go-ovirt-client\n2074756 - Namespace column provide wrong data in ClusterRole Details -\u003e Rolebindings tab\n2074767 - Metrics page show incorrect values due to metrics level config\n2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in\n2074902 - `oc debug node/nodename ? 
chroot /host somecommand` should exit with non-zero when the sub-command failed\n2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)\n2075024 - Metal upgrades permafailing on metal3 containers crash looping\n2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP\n2075091 - Symptom Detection.Undiagnosed panic detected in pod\n2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)\n2075149 - Trigger Translations When Extensions Are Updated\n2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors\n2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured\n2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn\u0027t work\n2075478 - Bump documentationBaseURL to 4.11\n2075491 - nmstate operator cannot be upgraded on SNO\n2075575 - Local Dev Env - Prometheus 404 Call errors spam the console\n2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled\n2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow\n2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade\n2075647 - \u0027oc adm upgrade ...\u0027 POSTs ClusterVersion, clobbering any unrecognized spec properties\n2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects\n2075778 - Fix failing TestGetRegistrySamples test\n2075873 - Bump recommended FCOS to 35.20220327.3.0\n2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn\u0027t take effect\n2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs\n2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object\n2076290 - PTP operator readme 
missing documentation on BC setup via PTP config\n2076297 - Router process ignores shutdown signal while starting up\n2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable\n2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap\n2076393 - [VSphere] survey fails to list datacenters\n2076521 - Nodes in the same zone are not updated in the right order\n2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types \u0027too fast\u0027\n2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10\n2076553 - Project access view replace group ref with user ref when updating their Role\n2076614 - Missing Events component from the SDK API\n2076637 - Configure metrics for vsphere driver to be reported\n2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters\n2076793 - CVO exits upgrade immediately rather than waiting for etcd backup\n2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours\n2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26\n2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it\n2076975 - Metric unset during static route conversion in configure-ovs.sh\n2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI\n2077050 - OCP should default to pd-ssd disk type on GCP\n2077150 - Breadcrumbs on a few screens don\u0027t have correct top margin spacing\n2077160 - Update owners for openshift/cluster-etcd-operator\n2077357 - [release-4.11] 200ms packet delay with OVN controller turn on\n2077373 - Accessibility warning on developer perspective\n2077386 - Import page shows untranslated values for the route advanced routing\u003esecurity options (devconsole~Edge)\n2077457 - 
failure in test case \"[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager\"\n2077497 - Rebase etcd to 3.5.3 or later\n2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API\n2077599 - OCP should alert users if they are on vsphere version \u003c7.0.2\n2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster\n2077797 - LSO pods don\u0027t have any resource requests\n2077851 - \"make vendor\" target is not working\n2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn\u0027t replaced, but a random port gets replaced and 8080 still stays\n2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region\n2078013 - drop multipathd.socket workaround\n2078375 - When using the wizard with template using data source the resulting vm use pvc source\n2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label\n2078431 - [OCPonRHV] - ERROR failed to instantiate provider \"openshift/local/ovirt\" to obtain schema:  ERROR fork/exec\n2078526 - Multicast breaks after master node reboot/sync\n2078573 - SDN CNI -Fail to create nncp when vxlan is up\n2078634 - CRI-O not killing Calico CNI stalled (zombie) processes. \n2078698 - search box may not completely remove content\n2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)\n2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused ?apiserver panic\u0027d...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int? when AllRequestBodies audit-profile is used. 
\n2078781 - PreflightValidation does not handle multiarch images\n2078866 - [BM][IPI] Installation with bonds fail - DaemonSet \"openshift-ovn-kubernetes/ovnkube-node\" rollout is not making progress\n2078875 - OpenShift Installer fail to remove Neutron ports\n2078895 - [OCPonRHV]-\"cow\" unsupported value in format field in install-config.yaml\n2078910 - CNO spitting out \".spec.groups[0].rules[4].runbook_url: field not declared in schema\"\n2078945 - Ensure only one apiserver-watcher process is active on a node. \n2078954 - network-metrics-daemon makes costly global pod list calls scaling per node\n2078969 - Avoid update races between old and new NTO operands during cluster upgrades\n2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned\n2079062 - Test for console demo plugin toast notification needs to be increased for ci testing\n2079197 - [RFE] alert when more than one default storage class is detected\n2079216 - Partial cluster update reference doc link returns 404\n2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity\n2079315 - (release-4.11) Gather ODF config data with Insights\n2079422 - Deprecated 1.25 API call\n2079439 - OVN Pods Assigned Same IP Simultaneously\n2079468 - Enhance the waitForIngressControllerCondition for better CI results\n2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster\n2079610 - Opeatorhub status shows errors\n2079663 - change default image features in RBD storageclass\n2079673 - Add flags to disable migrated code\n2079685 - Storageclass creation page with \"Enable encryption\" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config\n2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster\n2079788 - Operator restarts while applying the acm-ice example\n2079789 - cluster drops 
ImplicitlyEnabledCapabilities during upgrade\n2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade\n2079805 - Secondary scheduler operator should comply to restricted pod security level\n2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding\n2079837 - [RFE] Hub/Spoke example with daemonset\n2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation\n2079845 - The Event Sinks catalog page now has a blank space on the left\n2079869 - Builds for multiple kernel versions should be ran in parallel when possible\n2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices\n2079961 - The search results accordion has no spacing between it and the side navigation bar. \n2079965 - [rebase v1.24]  [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn\u0027t match pod\u0027s OS [Suite:openshift/conformance/parallel] [Suite:k8s]\n2080054 - TAGS arg for installer-artifacts images is not propagated to build images\n2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status\n2080197 - etcd leader changes produce test churn during early stage of test\n2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build\n2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080379 - Group all e2e tests as parallel or serial\n2080387 - Visual connector not appear between the node if a node get created using \"move connector\" to a different application\n2080416 - oc bash-completion problem\n2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load\n2080446 - Sync ironic images with latest bug fixes packages\n2080679 - [rebase 
v1.24] [sig-cli] test failure\n2080681 - [rebase v1.24]  [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]\n2080687 - [rebase v1.24]  [sig-network][Feature:Router] tests are failing\n2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously\n2080964 - Cluster operator special-resource-operator is always in Failing state with reason: \"Reconciling simple-kmod\"\n2080976 - Avoid hooks config maps when hooks are empty\n2081012 - [rebase v1.24]  [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]\n2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available\n2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources\n2081062 - Unrevert RHCOS back to 8.6\n2081067 - admin dev-console /settings/cluster should point out history may be excerpted\n2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network\n2081081 - PreflightValidation \"odd number of arguments passed as key-value pairs for logging\" error\n2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed\n2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount\n2081119 - `oc explain` output of default overlaySize is outdated\n2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects\n2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames\n2081447 - Ingress operator performs spurious updates in response to API\u0027s defaulting of router deployment\u0027s router container\u0027s ports\u0027 protocol field\n2081562 - lifecycle.posStart hook does not 
have network connectivity. \n2081685 - Typo in NNCE Conditions\n2081743 - [e2e] tests failing\n2081788 - MetalLB: the crds are not validated until metallb is deployed\n2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM\n2081895 - Use the managed resource (and not the manifest) for resource health checks\n2081997 - disconnected insights operator remains degraded after editing pull secret\n2082075 - Removing huge amount of ports takes a lot of time. \n2082235 - CNO exposes a generic apiserver that apparently does nothing\n2082283 - Transition to new oVirt Terraform provider\n2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni\n2082380 - [4.10.z] customize wizard is crashed\n2082403 - [LSO] No new build local-storage-operator-metadata-container created\n2082428 - oc patch healthCheckInterval with invalid \"5 s\" to the ingress-controller successfully\n2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS\n2082492 - [IPI IBM]Can\u0027t create image-registry-private-configuration secret with error \"specified resource key credentials does not contain HMAC keys\"\n2082535 - [OCPonRHV]-workers are cloned when \"clone: false\" is specified in install-config.yaml\n2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform\n2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return\n2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging\n2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset\n2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument\n2082763 - Cluster install stuck on the applying for operatorhub \"cluster\"\n2083149 - \"Update blocked\" label incorrectly displays on new minor versions in the \"Other available paths\" modal\n2083153 - Unable to 
use application credentials for Manila PVC creation on OpenStack\n2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters\n2083219 - DPU network operator doesn\u0027t deal with c1... inteface names\n2083237 - [vsphere-ipi] Machineset scale up process delay\n2083299 - SRO does not fetch mirrored DTK images in disconnected clusters\n2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified\n2083451 - Update external serivces URLs to console.redhat.com\n2083459 - Make numvfs \u003e totalvfs error message more verbose\n2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error\n2083514 - Operator ignores managementState Removed\n2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service\n2083756 - Linkify not upgradeable message on ClusterSettings page\n2083770 - Release image signature manifest filename extension is yaml\n2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities\n2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors\n2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form\n2083999 - \"--prune-over-size-limit\" is not working as expected\n2084079 - prometheus route is not updated to \"path: /api\" after upgrade from 4.10 to 4.11\n2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface\n2084124 - The Update cluster modal includes a broken link\n2084215 - Resource configmap \"openshift-machine-api/kube-rbac-proxy\" is defined by 2 manifests\n2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run\n2084280 - GCP API Checks Fail if non-required APIs are not enabled\n2084288 - \"alert/Watchdog must have no gaps or changes\" failing after bump\n2084292 - Access to dashboard resources is needed in dynamic plugin 
SDK\n2084331 - Resource with multiple capabilities included unless all capabilities are disabled\n2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. \n2084438 - Change Ping source spec.jsonData (deprecated) field  to spec.data\n2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster\n2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri\n2084463 - 5 control plane replica tests fail on ephemeral volumes\n2084539 - update azure arm templates to support customer provided vnet\n2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail\n2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (\".\") character\n2084615 - Add to navigation option on search page is not properly aligned\n2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass\n2084732 - A special resource that was created in OCP 4.9 can\u0027t be deleted after an upgrade to 4.10\n2085187 - installer-artifacts fails to build with go 1.18\n2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse\n2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated\n2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster\n2085407 - There is no Edit link/icon for labels on Node details page\n2085721 - customization controller image name is wrong\n2086056 - Missing doc for OVS HW offload\n2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11\n2086092 - update kube to v.24\n2086143 - CNO uses too much memory\n2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks\n2086301 - kubernetes nmstate pods are not running after creating instance\n2086408 - Podsecurity violation error getting logged for  externalDNS operand pods during 
deployment\n2086417 - Pipeline created from add flow has GIT Revision as required field\n2086437 - EgressQoS CRD not available\n2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment\n2086459 - oc adm inspect fails when one of resources not exist\n2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long\n2086465 - External identity providers should log login attempts in the audit trail\n2086469 - No data about title \u0027API Request Duration by Verb - 99th Percentile\u0027 display on the dashboard \u0027API Performance\u0027\n2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase\n2086505 - Update oauth-server images to be consistent with ART\n2086519 - workloads must comply to restricted security policy\n2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode\n2086542 - Cannot create service binding through drag and drop\n2086544 - ovn-k master daemonset on hypershift shouldn\u0027t log token\n2086546 - Service binding connector is not visible in the dark mode\n2086718 - PowerVS destroy code does not work\n2086728 - [hypershift] Move drain to controller\n2086731 - Vertical pod autoscaler operator needs a 4.11 bump\n2086734 - Update csi driver images to be consistent with ART\n2086737 - cloud-provider-openstack rebase to kubernetes v1.24\n2086754 - Cluster resource override operator needs a 4.11 bump\n2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory\n2086791 - Azure: Validate UltraSSD instances in multi-zone regions\n2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway\n2086936 - vsphere ipi should use cores by default instead of sockets\n2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert\n2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel\n2086962 - oc-mirror publishes metadata 
with --dry-run when publishing to mirror\n2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified\n2086972 - oc-mirror does not error invalid metadata is passed to the describe command\n2086974 - oc-mirror does not work with headsonly for operator 4.8\n2087024 - The oc-mirror result mapping.txt is not correct , can?t be used by `oc image mirror` command\n2087026 - DTK\u0027s imagestream is missing from OCP 4.11 payload\n2087037 - Cluster Autoscaler should use K8s 1.24 dependencies\n2087039 - Machine API components should use K8s 1.24 dependencies\n2087042 - Cloud providers components should use K8s 1.24 dependencies\n2087084 - remove unintentional nic support\n2087103 - \"Updating to release image\" from \u0027oc\u0027 should point out that the cluster-version operator hasn\u0027t accepted the update\n2087114 - Add simple-procfs-kmod in modprobe example in README.md\n2087213 - Spoke BMH stuck \"inspecting\" when deployed via ZTP in 4.11 OCP hub\n2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization\n2087556 - Failed to render DPU ovnk manifests\n2087579 - ` --keep-manifest-list=true` does not work for `oc adm release new` , only pick up the linux/amd64 manifest from the manifest list\n2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler\n2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile\n2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile\n2087687 - MCO does not generate event when user applies Default -\u003e LowUpdateSlowReaction WorkerLatencyProfile\n2087764 - Rewrite the registry backend will hit error\n2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn\u0027t try again\n2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services\n2087942 - CNO references images that are divergent from ART\n2087944 - 
KafkaSink Node visualized incorrectly\n2087983 - remove etcd_perf before restore\n2087993 - PreflightValidation many \"msg\":\"TODO: preflight checks\" in the operator log\n2088130 - oc-mirror init does not allow for automated testing\n2088161 - Match dockerfile image name with the name used in the release repo\n2088248 - Create HANA VM does not use values from customized HANA templates\n2088304 - ose-console: enable source containers for open source requirements\n2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install\n2088431 - AvoidBuggyIPs field of addresspool should be removed\n2088483 - oc adm catalog mirror returns 0 even if there are errors\n2088489 - Topology list does not allow selecting an application group anymore (again)\n2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource\n2088535 - MetalLB: Enable debug log level for downstream CI\n2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings `would violate PodSecurity \"restricted:v1.24\"`\n2088561 - BMH unable to start inspection: File name too long\n2088634 - oc-mirror does not fail when catalog is invalid\n2088660 - Nutanix IPI installation inside container failed\n2088663 - Better to change the default value of --max-per-registry to 6\n2089163 - NMState CRD out of sync with code\n2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster\n2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting\n2089254 - CAPI operator: Rotate token secret if its older than 30 minutes\n2089276 - origin tests for egressIP and azure fail\n2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas\u003e=2 and machine is Provisioning phase on Nutanix\n2089309 - [OCP 4.11] Ironic inspector image fails to clean disks 
that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath. ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit error if specify the channel not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on mutli arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus logging should be disabled prior to release
2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures.
2090358 - Initiating drain log message is displayed before the drain actually starts
2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials
2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]
2090430 - gofmt code
2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool
2090437 - Bump CNO to k8s 1.24
2090465 - golang version mismatch
2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type
2090537 - failure in ovndb migration when db is not ready in HA mode
2090549 - dpu-network-operator shall be able to run on amd64 arch platform
2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD
2090627 - Git commit and branch are empty in MetalLB log
2090692 - Bump to latest 1.24 k8s release
2090730 - must-gather should include multus logs.
2090731 - nmstate deploys two instances of webhook on a single-node cluster
2090751 - oc image mirror skip-missing flag does not skip images
2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers
2090774 - Add Readme to plugin directory
2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert
2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs
2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition"
2090819 - oc-mirror does not catch invalid registry input when a namespace is specified
2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24
2090829 - Bump OpenShift router to k8s 1.24
2090838 - Flaky test: ignore flapping host interface 'tunbr'
2090843 - addLogicalPort() performance/scale optimizations
2090895 - Dynamic plugin nav extension "startsWith" property does not work
2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError
2091029 - Cancel rollout action only appears when rollout is completed
2091030 - Some BM may fail booting with default bootMode strategy
2091033 - [Descheduler]: provide ability to override included/excluded namespaces
2091087 - ODC Helm backend Owners file needs updates
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091167 - IPsec runtime enabling not work in hypershift
2091218 - Update Dev Console Helm backend to use helm 3.9.0
2091433 - Update AWS instance types
2091542 - Error Loading/404 not found page shown after clicking "Current namespace only"
2091547 - Internet connection test with proxy permanently fails
2091567 - oVirt CSI driver should use latest go-ovirt-client
2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled
2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric
2091603 - WebSocket connection restarts when switching tabs in WebTerminal
2091613 - simple-kmod fails to build due to missing KVC
2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it
2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets"
2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec'
2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options
2091854 - clusteroperator status filter doesn't match all values in Status column
2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10
2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later
2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb
2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller
2092041 - Bump cluster-dns-operator to k8s 1.24
2092042 - Bump cluster-ingress-operator to k8s 1.24
2092047 - Kube 1.24 rebase for cloud-network-config-controller
2092137 - Search doesn't show all entries when name filter is cleared
2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16
2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results
2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved
2092442 - Minimum time between drain retries is not the expected one
2092464 - marketplace catalog defaults to v4.10
2092473 - libovsdb performance backports
2092495 - ovn: use up to 4 northd threads in non-SNO clusters
2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass
2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins
2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster
2092579 - Don't retry pod deletion if objects are not existing
2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks
2092703 - Incorrect mount propagation information in container status
2092815 - can't delete the unwanted image from registry by oc-mirror
2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds
2092867 - make repository name unique in acm-ice/acm-simple-kmod examples
2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes
2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os
2092889 - Incorrect updating of EgressACLs using direction "from-lport"
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing
2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit
2093047 - Dynamic Plugins: Generated API markdown duplicates `checkAccess` and `useAccessReview` doc
2093126 - [4.11] Bootimage bump tracker
2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade
2093288 - Default catalogs fails liveness/readiness probes
2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable
2093368 - Installer orphans FIPs created for LoadBalancer Services on `cluster destroy`
2093396 - Remove node-tainting for too-small MTU
2093445 - ManagementState reconciliation breaks SR
2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters
2093462 - Ingress Operator isn't reconciling the ingress cluster operator object
2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again
2093593 - Import from Devfile shows configuration options that shoudn't be there
2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding
2093600 - Project access tab should apply new permissions before it delete old ones
2093601 - Project access page doesn't allow the user to update the settings twice (without manually reload the content)
2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24
2093797 - 'oc registry login' with serviceaccount function need update
2093819 - An etcd member for a new machine was never added to the cluster
2093930 - Gather console helm install totals metric
2093957 - Oc-mirror write dup metadata to registry backend
2093986 - Podsecurity violation error getting logged for pod-identity-webhook
2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6
2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig
2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips
2094039 - egressIP panics with nil pointer dereference
2094055 - Bump coreos-installer for s390x Secure Execution
2094071 - No runbook created for SouthboundStale alert
2094088 - Columns in NBDB may never be updated by OVNK
2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator
2094152 - Alerts in the virtualization overview status card aren't filtered
2094196 - Add default and validating webhooks for Power VS MAPI
2094227 - Topology: Create Service Binding should not be the last option (even under delete)
2094239 - custom pool Nodes with 0 nodes are always populated in progress bar
2094303 - If og is configured with sa, operator installation will be failed.
2094335 - [Nutanix] - debug logs are enabled by default in machine-controller
2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform
2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration
2094525 - Allow automatic upgrades for efs operator
2094532 - ovn-windows CI jobs are broken
2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run
2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character
2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s
2094801 - Kuryr controller keep restarting when handling IPs with leading zeros
2094806 - Machine API oVrit component should use K8s 1.24 dependencies
2094816 - Kuryr controller restarts when over quota
2094833 - Repository overview page does not show default PipelineRun template for developer user
2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state
2094864 - Rebase CAPG to latest changes
2094866 - oc-mirror does not always delete all manifests associated with an image during pruning
2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing
2094902 - Fix installer cross-compiling
2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters
2095049 - managed-csi StorageClass does not create PVs
2095071 - Backend tests fails after devfile registry update
2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh
2095110 - [ovn] northd container termination script must use bash
2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp
2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance
2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic
2095231 - Kafka Sink sidebar in topology is empty
2095247 - Event sink form doesn't show channel as sink until app is refreshed
2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node
2095256 - Samples Owner needs to be Updated
2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection'
2095362 - oVirt CSI driver operator should use latest go-ovirt-client
2095574 - e2e-agnostic CI job fails
2095687 - Debug Container shown for build logs and on click ui breaks
2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster
2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns
2095756 - CNO panics with concurrent map read/write
2095772 - Memory requests for ovnkube-master containers are over-sized
2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB
2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized
2096053 - Builder Image icons
in Git Import flow are hard to see in Dark mode
2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6
2096315 - NodeClockNotSynchronising alert's severity should be critical
2096350 - Web console doesn't display webhook errors for upgrades
2096352 - Collect whole journal in gather
2096380 - acm-simple-kmod references deprecated KVC example
2096392 - Topology node icons are not properly visible in Dark mode
2096394 - Add page Card items background color does not match with column background color in Dark mode
2096413 - br-ex not created due to default bond interface having a different mac address than expected
2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile
2096605 - [vsphere] no validation checking for diskType
2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups
2096855 - `oc adm release new` failed with error when use an existing multi-arch release image as input
2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider
2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import
2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology
2097043 - No clean way to specify operand issues to KEDA OLM operator
2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries
2097067 - ClusterVersion history pruner does not always retain initial completed update entry
2097153 - poor performance on API call to vCenter ListTags with thousands of tags
2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects
2097239 - Change Lower CPU limits for Power VS cloud
2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support
2097260 - openshift-install create manifests failed for Power VS platform
2097276 - MetalLB CI deploys the operator via manifests and not using the csv
2097282 - chore: update external-provisioner to the latest upstream release
2097283 - chore: update external-snapshotter to the latest upstream release
2097284 - chore: update external-attacher to the latest upstream release
2097286 - chore: update node-driver-registrar to the latest upstream release
2097334 - oc plugin help shows 'kubectl'
2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11
2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook
2097454 - Placeholder bug for OCP 4.11.0 metadata release
2097503 - chore: rebase against latest external-resizer
2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading
2097607 - Add Power VS support to Webhooks tests in actuator e2e test
2097685 - Ironic-agent can't restart because of existing container
2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1
2097810 - Required Network tools missing for Testing e2e PTP
2097832 - clean up unused IPv6DualStackNoUpgrade feature gate
2097940 - openshift-install destroy cluster traps if vpcRegion not specified
2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs failing
2098172 - oc-mirror does not validatethe registry in the storage config
2098175 - invalid license in python-dataclasses-0.8-2.el8 spec
2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file
2098242 - typo in SRO specialresourcemodule
2098243 - Add error check to Platform create for Power VS
2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device
2098508 - Control-plane-machine-set-operator report panic
2098610 - No need to check the push permission with ?manifests-only option
2099293 - oVirt cluster API provider should use latest go-ovirt-client
2099330 - Edit application grouping is shown to user with view only access in a cluster
2099340 - CAPI e2e tests for AWS are missing
2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump
2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups
2099528 - Layout issue: No spacing in delete modals
2099561 - Prometheus returns HTTP 500 error on /favicon.ico
2099582 - Format and update Repository overview content
2099611 - Failures on etcd-operator watch channels
2099637 - Should print error when use --keep-manifest-list\xfalse for manifestlist image
2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)
2099668 - KubeControllerManager should degrade when GC stops working
2099695 - Update CAPG after rebase
2099751 - specialresourcemodule stacktrace while looping over build status
2099755 - EgressIP node's mgmtIP reachability configuration option
2099763 - Update icons for event sources and sinks in topology, Add page, and context menu
2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]
2099821 - exporting a pointer for the loop variable
2099875 - The speaker won't start if there's another component on the host listening on 8080
2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing
2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file
2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster
2100001 - Sync upstream v1.22.0 downstream
2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator
2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment
2100038 - failure to update special-resource-lifecycle table during update Event
2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump
2100138 - release info --bugs has no differentiator between Jira and Bugzilla
2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation
2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar
2100323 - Sqlit-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied"
2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile
2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8
2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running
2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field
2100507 - Remove redundant log lines from obj_retry.go
2100536 - Update API to allow EgressIP node reachability check
2100601 - Update CNO to allow EgressIP node reachability check
2100643 - [Migration] [GCP]OVN can not rollback to SDN
2100644 - openshift-ansible FTBFS on RHEL8
2100669 - Telemetry should not log the full path if it contains a username
2100749 - [OCP 4.11] multipath support needs multipath modules
2100825 - Update machine-api-powervs go modules to latest version
2100841 - tiny openshift-install usability fix for setting KUBECONFIG
2101460 - An etcd member for a new machine was never added to the cluster
2101498 - Revert Bug 2082599: add upper bound to number of failed attempts
2102086 - The base image is still 4.10 for operator-sdk 1.22
2102302 - Dummy bug for 4.10 backports
2102362 - Valid regions should be allowed in GCP install config
2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster
2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption
2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install
2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root
2102947 - [VPA] recommender is logging errors for pods with init containers
2103053 - [4.11] Backport Prow CI improvements from master
2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly
2103080 - br-ex not created due to default bond interface having a different mac address than expected
2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces
2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not path-absolute for :path'
2103749 - MachineConfigPool is not getting updated
2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec
2104432 - [dpu-network-operator] Updating images to be consistent with ART
2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack
2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0
2104589 - must-gather namespace should have "privileged" warn and audit pod security labels besides enforce
2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes
2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference"
2104727 - Bootstrap node should honor http proxy
2104906 - Uninstall fails with Observed a panic: runtime.boundsError
2104951 - Web console doesn't display webhook errors for upgrades
2104991 - Completed pods may not be correctly cleaned up
2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds
2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied
2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history
2105167 - BuildConfig throws error when using a label with a / in it
2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial
2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator
2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs.
2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18
2106051 - Unable to deploy acm-ice using latest SRO 4.11 build
2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]
2106062 - [4.11] Bootimage bump tracker
2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc"
2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls
2106313 - bond-cni: backport bond-cni GA items to 4.11
2106543 - Typo in must-gather release-4.10
2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI
2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed.
rpm-ostree status shows No space left on device
2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted
2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing
2107501 - metallb greenwave tests failure
2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found"
2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade
2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference
2108686 - rpm-ostreed: start limit hit easily
2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate
2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations
2111055 - dummy bug for 4.10.z bz2110938

5. References:

https://access.redhat.com/security/cve/CVE-2018-25009
https://access.redhat.com/security/cve/CVE-2018-25010
https://access.redhat.com/security/cve/CVE-2018-25012
https://access.redhat.com/security/cve/CVE-2018-25013
https://access.redhat.com/security/cve/CVE-2018-25014
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-17541
https://access.redhat.com/security/cve/CVE-2020-19131
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-28493
https://access.redhat.com/security/cve/CVE-2020-35492
https://access.redhat.com/security/cve/CVE-2020-36330
https://access.redhat.com/security/cve/CVE-2020-36331
https://access.redhat.com/security/cve/CVE-2020-36332
https://access.redhat.com/security/cve/CVE-2021-3481
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3634
https://access.redhat.com/security/cve/CVE-2021-3672
https://access.redhat.com/security/cve/CVE-2021-3695
https://access.redhat.com/security/cve/CVE-2021-3696
https://access.redhat.com/security/cve/CVE-2021-3697
https://access.redhat.com/security/cve/CVE-2021-3737
https://access.redhat.com/security/cve/CVE-2021-4115
https://access.redhat.com/security/cve/CVE-2021-4156
https://access.redhat.com/security/cve/CVE-2021-4189
https://access.redhat.com/security/cve/CVE-2021-20095
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-23648
https://access.redhat.com/security/cve/CVE-2021-25219
https://access.redhat.com/security/cve/CVE-2021-31535
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-38185
https://access.redhat.com/security/cve/CVE-2021-38593
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-41617
https://access.redhat.com/security/cve/CVE-2021-42771
https://access.redhat.com/security/cve/CVE-2021-43527
https://access.redhat.com/security/cve/CVE-2021-43818
https://access.redhat.com/security/cve/CVE-2021-44225
https://access.redhat.com/security/cve/CVE-2021-44906
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0778
https://access.redhat.com/security/cve/CVE-2022-1012
https://access.redhat.com/security/cve/CVE-2022-1215
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1621
https://access.redhat.com/security/cve/CVE-2022-1629
https://access.redhat.com/security/cve/CVE-2022-1706
https://access.redhat.com/security/cve/CVE-2022-1729
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-26691
https://access.redhat.com/security/cve/CVE-2022-26945
https://access.redhat.com/security/cve/CVE-2022-27191
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-28733
https://access.redhat.com/security/cve/CVE-2022-28734
https://access.redhat.com/security/cve/CVE-2022-28735
https://access.redhat.com/security/cve/CVE-2022-28736
https://access.redhat.com/security/cve/CVE-2022-28737
https://access.redhat.com/security/cve/CVE-2022-29162
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-30321
https://access.redhat.com/security/cve/CVE-2022-30322
https://access.redhat.com/security/cve/CVE-2022-30323
https://access.redhat.com/security/cve/CVE-2022-32250
https://access.redhat.com/security/updates/classification/#important

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYvOfk9zjgjWX9erEAQhJ/w//UlbBGKBBFBAyfEmQf9Zu0yyv6MfZW0Zl
iO1qXVIl9UQUFjTY5ejerx7cP8EBWLhKaiiqRRjbjtj+w+ENGB4LLj6TEUrSM5oA
YEmhnX3M+GUKF7Px61J7rZfltIOGhYBvJ+qNZL2jvqz1NciVgI4/71cZWnvDbGpa
02w3Dn0JzhTSR9znNs9LKcV/anttJ3NtOYhqMXnN8EpKdtzQkKRazc7xkOTxfxyl
jRiER2Z0TzKDE6dMoVijS2Sv5j/JF0LRwetkZl6+oh8ehKh5GRV3lPg3eVkhzDEo
/gp0P9GdLMHi6cS6uqcREbod//waSAa7cssgULoycFwjzbDK3L2c+wMuWQIgXJca
RYuP6wvrdGwiI1mgUi/226EzcZYeTeoKxnHkp7AsN9l96pJYafj0fnK1p9NM/8g3
jBE/W4K8jdDNVd5l1Z5O0Nyxk6g4P8MKMe10/w/HDXFPSgufiCYIGX4TKqb+ESIR
SuYlSMjoGsB4mv1KMDEUJX6d8T05lpEwJT0RYNdZOouuObYMtcHLpRQHH9mkj86W
pHdma5aGG/mTMvSMW6l6L05uT41Azm6fVimTv+E5WvViBni2480CVH+9RexKKSyL
XcJX1gaLdo+72I/gZrtT+XE5tcJ3Sf5fmfsenQeY4KFum/cwzbM6y7RGn47xlEWB
xBWKPzRxz0Q=
=9r0B
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202305-16
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
                                           https://security.gentoo.org/
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

 Severity: Low
    Title: Vim, gVim: Multiple Vulnerabilities
     Date: May 03, 2023
     Bugs: #851231, #861092, #869359, #879257, #883681, #889730
       ID: 202305-16

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Synopsis
========

Multiple vulnerabilities have been found in Vim, the worst of which
could result in denial of service. gVim is the GUI version of Vim.

Affected packages
=================

    -------------------------------------------------------------------
     Package              /     Vulnerable     /            Unaffected
    -------------------------------------------------------------------
  1  app-editors/gvim           < 9.0.1157                >= 9.0.1157
  2  app-editors/vim            < 9.0.1157                >= 9.0.1157
  3  app-editors/vim-core       < 9.0.1157                >= 9.0.1157

Description
===========

Multiple vulnerabilities have been discovered in Vim, gVim. Please
review the CVE identifiers referenced below for details.

Impact
======

Please review the referenced CVE identifiers for details.

Workaround
==========

There is no known workaround at this time.

Resolution
==========

All Vim users should upgrade to the latest version:

  # emerge --sync
  # emerge --ask --oneshot --verbose ">=app-editors/vim-9.0.1157"

All gVim users should upgrade to the latest version:

  # emerge --sync
  # emerge --ask --oneshot --verbose ">=app-editors/gvim-9.0.1157"

All vim-core users should upgrade to the latest version:

  # emerge --sync
  # emerge --ask --oneshot --verbose ">=app-editors/vim-core-9.0.1157"

References
==========

[ 1 ] CVE-2022-1154
      https://nvd.nist.gov/vuln/detail/CVE-2022-1154
[ 2 ] CVE-2022-1160
      https://nvd.nist.gov/vuln/detail/CVE-2022-1160
[ 3 ] CVE-2022-1381
      https://nvd.nist.gov/vuln/detail/CVE-2022-1381
[ 4 ] CVE-2022-1420
      https://nvd.nist.gov/vuln/detail/CVE-2022-1420
[ 5 ] CVE-2022-1616
      https://nvd.nist.gov/vuln/detail/CVE-2022-1616
[ 6 ] CVE-2022-1619
      https://nvd.nist.gov/vuln/detail/CVE-2022-1619
[ 7 ] CVE-2022-1620
      https://nvd.nist.gov/vuln/detail/CVE-2022-1620
[ 8 ] CVE-2022-1621
      https://nvd.nist.gov/vuln/detail/CVE-2022-1621
[ 9 ] CVE-2022-1629
      https://nvd.nist.gov/vuln/detail/CVE-2022-1629
[ 10 ] CVE-2022-1674
      https://nvd.nist.gov/vuln/detail/CVE-2022-1674
[ 11 ] CVE-2022-1720
      https://nvd.nist.gov/vuln/detail/CVE-2022-1720
[ 12 ] CVE-2022-1725
      https://nvd.nist.gov/vuln/detail/CVE-2022-1725
[ 13 ] CVE-2022-1733
      https://nvd.nist.gov/vuln/detail/CVE-2022-1733
[ 14 ] CVE-2022-1735
      https://nvd.nist.gov/vuln/detail/CVE-2022-1735
[ 15 ] CVE-2022-1769
      https://nvd.nist.gov/vuln/detail/CVE-2022-1769
[ 16 ] CVE-2022-1771
      https://nvd.nist.gov/vuln/detail/CVE-2022-1771
[ 17 ] CVE-2022-1785
      https://nvd.nist.gov/vuln/detail/CVE-2022-1785
[ 18 ] CVE-2022-1796
      https://nvd.nist.gov/vuln/detail/CVE-2022-1796
[ 19 ] CVE-2022-1851
      https://nvd.nist.gov/vuln/detail/CVE-2022-1851
[ 20 ] CVE-2022-1886
      https://nvd.nist.gov/vuln/detail/CVE-2022-1886
[ 21 ] CVE-2022-1897
      https://nvd.nist.gov/vuln/detail/CVE-2022-1897
[ 22 ] CVE-2022-1898
      https://nvd.nist.gov/vuln/detail/CVE-2022-1898
[ 23 ] CVE-2022-1927
      https://nvd.nist.gov/vuln/detail/CVE-2022-1927
[ 24 ] CVE-2022-1942
      https://nvd.nist.gov/vuln/detail/CVE-2022-1942
[ 25 ] CVE-2022-1968
      https://nvd.nist.gov/vuln/detail/CVE-2022-1968
[ 26 ] CVE-2022-2000
      https://nvd.nist.gov/vuln/detail/CVE-2022-2000
[ 27 ] CVE-2022-2042
      https://nvd.nist.gov/vuln/detail/CVE-2022-2042
[ 28 ] CVE-2022-2124
      https://nvd.nist.gov/vuln/detail/CVE-2022-2124
[ 29 ] CVE-2022-2125
      https://nvd.nist.gov/vuln/detail/CVE-2022-2125
[ 30 ] CVE-2022-2126
      https://nvd.nist.gov/vuln/detail/CVE-2022-2126
[ 31 ] CVE-2022-2129
      https://nvd.nist.gov/vuln/detail/CVE-2022-2129
[ 32 ] CVE-2022-2175
      https://nvd.nist.gov/vuln/detail/CVE-2022-2175
[ 33 ] CVE-2022-2182
      https://nvd.nist.gov/vuln/detail/CVE-2022-2182
[ 34 ] CVE-2022-2183
      https://nvd.nist.gov/vuln/detail/CVE-2022-2183
[ 35 ] CVE-2022-2206
      https://nvd.nist.gov/vuln/detail/CVE-2022-2206
[ 36 ] CVE-2022-2207
      https://nvd.nist.gov/vuln/detail/CVE-2022-2207
[ 37 ] CVE-2022-2208
      https://nvd.nist.gov/vuln/detail/CVE-2022-2208
[ 38 ] CVE-2022-2210
      https://nvd.nist.gov/vuln/detail/CVE-2022-2210
[ 39 ] CVE-2022-2231
      https://nvd.nist.gov/vuln/detail/CVE-2022-2231
[ 40 ] CVE-2022-2257
      https://nvd.nist.gov/vuln/detail/CVE-2022-2257
[ 41 ] CVE-2022-2264
      https://nvd.nist.gov/vuln/detail/CVE-2022-2264
[ 42 ] CVE-2022-2284
      https://nvd.nist.gov/vuln/detail/CVE-2022-2284
[ 43 ] CVE-2022-2285
      https://nvd.nist.gov/vuln/detail/CVE-2022-2285
[ 44 ] CVE-2022-2286
      https://nvd.nist.gov/vuln/detail/CVE-2022-2286
[ 45 ] CVE-2022-2287
      https://nvd.nist.gov/vuln/detail/CVE-2022-2287
[ 46 ] CVE-2022-2288
https://nvd.nist.gov/vuln/detail/CVE-2022-2288\n[ 47 ] CVE-2022-2289\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2289\n[ 48 ] CVE-2022-2304\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2304\n[ 49 ] CVE-2022-2343\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2343\n[ 50 ] CVE-2022-2344\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2344\n[ 51 ] CVE-2022-2345\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2345\n[ 52 ] CVE-2022-2522\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2522\n[ 53 ] CVE-2022-2816\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2816\n[ 54 ] CVE-2022-2817\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2817\n[ 55 ] CVE-2022-2819\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2819\n[ 56 ] CVE-2022-2845\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2845\n[ 57 ] CVE-2022-2849\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2849\n[ 58 ] CVE-2022-2862\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2862\n[ 59 ] CVE-2022-2874\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2874\n[ 60 ] CVE-2022-2889\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2889\n[ 61 ] CVE-2022-2923\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2923\n[ 62 ] CVE-2022-2946\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2946\n[ 63 ] CVE-2022-2980\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2980\n[ 64 ] CVE-2022-2982\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2982\n[ 65 ] CVE-2022-3016\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3016\n[ 66 ] CVE-2022-3099\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3099\n[ 67 ] CVE-2022-3134\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3134\n[ 68 ] CVE-2022-3153\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3153\n[ 69 ] CVE-2022-3234\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3234\n[ 70 ] CVE-2022-3235\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3235\n[ 71 ] CVE-2022-3256\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3256\n[ 72 ] CVE-2022-3278\n      
https://nvd.nist.gov/vuln/detail/CVE-2022-3278\n[ 73 ] CVE-2022-3296\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3296\n[ 74 ] CVE-2022-3297\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3297\n[ 75 ] CVE-2022-3324\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3324\n[ 76 ] CVE-2022-3352\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3352\n[ 77 ] CVE-2022-3491\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3491\n[ 78 ] CVE-2022-3520\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3520\n[ 79 ] CVE-2022-3591\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3591\n[ 80 ] CVE-2022-3705\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3705\n[ 81 ] CVE-2022-4141\n      https://nvd.nist.gov/vuln/detail/CVE-2022-4141\n[ 82 ] CVE-2022-4292\n      https://nvd.nist.gov/vuln/detail/CVE-2022-4292\n[ 83 ] CVE-2022-4293\n      https://nvd.nist.gov/vuln/detail/CVE-2022-4293\n[ 84 ] CVE-2022-47024\n      https://nvd.nist.gov/vuln/detail/CVE-2022-47024\n[ 85 ] CVE-2023-0049\n      https://nvd.nist.gov/vuln/detail/CVE-2023-0049\n[ 86 ] CVE-2023-0051\n      https://nvd.nist.gov/vuln/detail/CVE-2023-0051\n[ 87 ] CVE-2023-0054\n      https://nvd.nist.gov/vuln/detail/CVE-2023-0054\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202305-16\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2023 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
==========================================================================\nUbuntu Security Notice USN-5613-2\nSeptember 19, 2022\n\nvim regression\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nUSN-5613-1 caused a regression in Vim. \n\nSoftware Description:\n- vim: Vi IMproved - enhanced vi editor\n\nDetails:\n\nUSN-5613-1 fixed vulnerabilities in Vim. Unfortunately that update failed\nto include binary packages for some architectures. This update fixes that\nregression. \n\nWe apologize for the inconvenience. \n\nOriginal advisory details:\n\n It was discovered that Vim was not properly performing bounds checks\n when executing spell suggestion commands. An attacker could possibly use\n this issue to cause a denial of service or execute arbitrary code. \n (CVE-2022-0943)\n \n It was discovered that Vim was using freed memory when dealing with\n regular expressions through its old regular expression engine. If a user\n were tricked into opening a specially crafted file, an attacker could\n crash the application, leading to a denial of service, or possibly achieve\n code execution. (CVE-2022-1154)\n \n It was discovered that Vim was not properly performing checks on name of\n lambda functions. An attacker could possibly use this issue to cause a\n denial of service. This issue affected only Ubuntu 22.04 LTS. \n (CVE-2022-1420)\n \n It was discovered that Vim was incorrectly performing bounds checks\n when processing invalid commands with composing characters in Ex\n mode. An attacker could possibly use this issue to cause a denial of\n service or execute arbitrary code. (CVE-2022-1616)\n \n It was discovered that Vim was not properly processing latin1 data\n when issuing Ex commands. An attacker could possibly use this issue to\n cause a denial of service or execute arbitrary code. 
(CVE-2022-1619)\n \n It was discovered that Vim was not properly performing memory\n management when dealing with invalid regular expression patterns in\n buffers. An attacker could possibly use this issue to cause a denial of\n service. (CVE-2022-1620)\n \n It was discovered that Vim was not properly processing invalid bytes\n when performing spell check operations. An attacker could possibly use\n this issue to cause a denial of service or execute arbitrary code. \n (CVE-2022-1621)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n  vim                             2:8.1.2269-1ubuntu5.9\n\nIn general, a standard system update will make all the necessary changes. Solution:\n\nOSP 16.2 Release - OSP Director Operator Containers tech preview\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2011007 - CVE-2021-41103 containerd: insufficiently restricted permissions on container root and plugin directories\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)\n2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)\n2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)\n2092928 - CVE-2022-26945 go-getter: command injection vulnerability\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n\n5",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-1621"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "db": "VULHUB",
        "id": "VHN-419734"
      },
      {
        "db": "PACKETSTORM",
        "id": "167666"
      },
      {
        "db": "PACKETSTORM",
        "id": "168395"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "172122"
      },
      {
        "db": "PACKETSTORM",
        "id": "168420"
      },
      {
        "db": "PACKETSTORM",
        "id": "167778"
      },
      {
        "db": "PACKETSTORM",
        "id": "167984"
      }
    ],
    "trust": 2.34
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-1621",
        "trust": 4.0
      },
      {
        "db": "PACKETSTORM",
        "id": "167778",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "168395",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "168420",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "167666",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "167853",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "167985",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "167419",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2022052018",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071342",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072631",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022060635",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022070109",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022070642",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072127",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072010",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2405",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5300",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6148",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4641",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4601",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4617",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3226",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3821",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2791",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3977",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3554",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3873",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3644",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-2826",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "167984",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "167838",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167644",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167845",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-419734",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168042",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "172122",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-419734"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "db": "PACKETSTORM",
        "id": "167666"
      },
      {
        "db": "PACKETSTORM",
        "id": "168395"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "172122"
      },
      {
        "db": "PACKETSTORM",
        "id": "168420"
      },
      {
        "db": "PACKETSTORM",
        "id": "167778"
      },
      {
        "db": "PACKETSTORM",
        "id": "167984"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-2826"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1621"
      }
    ]
  },
  "id": "VAR-202205-0855",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-419734"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2024-07-23T20:12:05.030000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT213488",
        "trust": 0.8,
        "url": "https://lists.debian.org/debian-lts-announce/2022/05/msg00022.html"
      },
      {
        "title": "Vim Buffer error vulnerability fix",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=193122"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-2826"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-122",
        "trust": 1.0
      },
      {
        "problemtype": "Out-of-bounds writing (CWE-787) [NVD evaluation ]",
        "trust": 0.8
      },
      {
        "problemtype": "CWE-787",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-419734"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1621"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht213488"
      },
      {
        "trust": 1.7,
        "url": "https://huntr.dev/bounties/520ce714-bfd2-4646-9458-f52cd22bb2fb"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2022/oct/28"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2022/oct/41"
      },
      {
        "trust": 1.7,
        "url": "https://security.gentoo.org/glsa/202208-32"
      },
      {
        "trust": 1.7,
        "url": "https://github.com/vim/vim/commit/7c824682d2028432ee082703ef0ab399867a089b"
      },
      {
        "trust": 1.7,
        "url": "https://lists.debian.org/debian-lts-announce/2022/05/msg00022.html"
      },
      {
        "trust": 1.7,
        "url": "https://lists.debian.org/debian-lts-announce/2022/11/msg00032.html"
      },
      {
        "trust": 1.7,
        "url": "https://security.gentoo.org/glsa/202305-16"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1621"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/hip7kg7tvs5yf3qreay2gogut3yubzai/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/hip7kg7tvs5yf3qreay2gogut3yubzai/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072631"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3977"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2405"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071342"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/167853/red-hat-security-advisory-2022-5531-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2791"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5300"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022070109"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/167985/red-hat-security-advisory-2022-5909-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022052018"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3226"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3644"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3821"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072127"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4617"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/vim-buffer-overflow-via-in-vim-strncpy-find-word-38384"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022070642"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb20220720108"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/167666/red-hat-security-advisory-2022-5242-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/167419/ubuntu-security-notice-usn-5460-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022060635"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-1621/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6148"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4641"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3554"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3873"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168395/ubuntu-security-notice-usn-5613-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168420/ubuntu-security-notice-usn-5613-2.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/167778/red-hat-security-advisory-2022-5673-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4601"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1154"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-1621"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1420"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1629"
      },
      {
        "trust": 0.4,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.4,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-1629"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0943"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1619"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1620"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1616"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-27776"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-27774"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-27782"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-22576"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.2,
        "url": "https://ubuntu.com/security/notices/usn-5613-1"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-26945"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-30321"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-30322"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-30323"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25314"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25313"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5242"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0554"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1154"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0943"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1420"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0554"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/vim/2:8.2.3995-1ubuntu2.1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/vim/2:8.1.2269-1ubuntu5.8"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/vim/2:8.0.1453-1ubuntu1.9"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28327"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32250"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41617"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43818"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36331"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38593"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25014"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25009"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-19131"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24921"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38185"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23648"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5069"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28733"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27191"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29162"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36330"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25010"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35492"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23772"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28736"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://10.0.0.7:2379"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21698"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-17541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3697"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1706"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28734"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3695"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23806"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1729"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43527"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29810"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4115"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24675"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1733"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1942"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2345"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2207"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2182"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2231"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2210"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2816"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2862"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1796"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3256"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2285"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3296"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2000"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3153"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3705"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3235"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1771"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2889"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2288"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1886"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2304"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2287"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2343"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-0051"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2923"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2982"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1851"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2264"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3520"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1898"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4293"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2126"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3099"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2208"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2042"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2874"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3016"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2124"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3278"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-47024"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1720"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-0054"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2286"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1381"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4141"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2819"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2946"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1769"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-0049"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2175"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2849"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2284"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3324"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2980"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2817"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2344"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2522"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2289"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2129"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1968"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3591"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2257"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4292"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3134"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3297"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3352"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3491"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2125"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1725"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1160"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3234"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/vim/2:8.1.2269-1ubuntu5.9"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-5613-2"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/bugs/1989973"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:4991"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26945"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/containers"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4189"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5673"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5908"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-34169"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38561"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21540"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21540"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21541"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21541"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-419734"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "db": "PACKETSTORM",
        "id": "167666"
      },
      {
        "db": "PACKETSTORM",
        "id": "168395"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "172122"
      },
      {
        "db": "PACKETSTORM",
        "id": "168420"
      },
      {
        "db": "PACKETSTORM",
        "id": "167778"
      },
      {
        "db": "PACKETSTORM",
        "id": "167984"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-2826"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1621"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-419734"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "db": "PACKETSTORM",
        "id": "167666"
      },
      {
        "db": "PACKETSTORM",
        "id": "168395"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "172122"
      },
      {
        "db": "PACKETSTORM",
        "id": "168420"
      },
      {
        "db": "PACKETSTORM",
        "id": "167778"
      },
      {
        "db": "PACKETSTORM",
        "id": "167984"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-2826"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-1621"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-05-10T00:00:00",
        "db": "VULHUB",
        "id": "VHN-419734"
      },
      {
        "date": "2023-08-17T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "date": "2022-07-01T15:00:50",
        "db": "PACKETSTORM",
        "id": "167666"
      },
      {
        "date": "2022-09-15T14:21:20",
        "db": "PACKETSTORM",
        "id": "168395"
      },
      {
        "date": "2022-08-10T15:56:22",
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "date": "2023-05-03T15:29:00",
        "db": "PACKETSTORM",
        "id": "172122"
      },
      {
        "date": "2022-09-19T18:26:16",
        "db": "PACKETSTORM",
        "id": "168420"
      },
      {
        "date": "2022-07-21T20:26:52",
        "db": "PACKETSTORM",
        "id": "167778"
      },
      {
        "date": "2022-08-05T14:51:51",
        "db": "PACKETSTORM",
        "id": "167984"
      },
      {
        "date": "2022-05-10T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202205-2826"
      },
      {
        "date": "2022-05-10T14:15:08.460000",
        "db": "NVD",
        "id": "CVE-2022-1621"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-11-29T00:00:00",
        "db": "VULHUB",
        "id": "VHN-419734"
      },
      {
        "date": "2023-08-17T04:23:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      },
      {
        "date": "2023-05-04T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202205-2826"
      },
      {
        "date": "2023-11-07T03:42:03.430000",
        "db": "NVD",
        "id": "CVE-2022-1621"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-2826"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "vim/vim\u00a0 Out-of-bounds write vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-010779"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202205-2826"
      }
    ],
    "trust": 0.6
  }
}

